Book Review: Artificial Unintelligence

Autumn 2020

It’s almost impossible to reconcile the state of AI as depicted in the media with the reality of AI that you encounter in the real world, isn’t it? On the one hand we seem to be just a few years from artificial general intelligence that will outperform humans in every way. On the other, I can’t find a machine transcription service that copes with an even moderately challenging accent.

Francois Chollet, the deep learning expert who created Keras, put it neatly in a tweet:

“I'm so old I remember when fully autonomous cars were going to be ready for mass deployment by late 2017”  

Autonomous vehicles are one of the best examples of technology’s tendency to over-promise and under-deliver, and I’d put customer service chatbots in a similar category. The tools that we call AI, now and for the foreseeable future, can be extremely good at performing very specific tasks, but they don’t think. There is still nothing close to the “general intelligence” AI you might see in science fiction (the closest thing I know of is GPT-3), nor even a consensus on whether (or how) building one might be possible.

What causes this gulf is partly the enthusiasm of people within technology who are excited about the potential of their tools, and partly the hyperbole of marketing departments and journalists who feed us a sci-fi vision of AI. What’s needed is a clear-headed view of the strengths and weaknesses of AI solutions, and of the current state of the art, from someone who understands the field but has enough distance to see it clearly. In Artificial Unintelligence, Meredith Broussard gives us exactly that.

“…general AI is what we want…Narrow AI is what we have. It’s the difference between dreams and reality.”

This is not, to be clear, an anti-AI book. Broussard herself uses and develops AI tools, and she is enthusiastic about their potential; but she is also very aware of their limitations, and of the cultural issues within the world of technology which exacerbate the social impact of those limitations.

Technochauvinism

The core problem, she argues, is what she calls “technochauvinism”—the belief that technology is always the solution to every problem. This manifests in the regular spectacle of Silicon Valley entrepreneurs “inventing” products that have existed for years, such as “reusable tissues” (in other words, handkerchiefs).

That’s quite funny, and not doing anyone any harm, but the same outlook becomes much more damaging when applied to machine learning approaches to problems with a real impact on society: if you believe, for instance, that AI is a better way to diagnose disease, to make decisions about the early release of prisoners, or to sift job applications.

“When you believe that a decision generated by a computer is better or fairer than a decision generated by a human, you stop questioning the validity of the inputs to the system.”

As Mark Smith pointed out in the interview featured earlier in this issue, the algorithms are not to blame when things go wrong. But technochauvinism can make us blind to the quality of the data we’re putting in, and to the decisions and biases that are baked into it.

Data

All AI tools require data, usually mountains of data. Where does it come from?

“…data always comes down to people counting things…data is socially constructed.”

This is often overlooked, but enormously important. First of all, it means that technochauvinists tend to prioritise things which are relatively easy to measure. It’s very difficult to measure quality, for instance, but very easy to measure popularity. To most of us it’s fairly obvious that there’s a distinction between those two things, although perhaps we’d be hard put to define exactly what we mean by “quality”.

In practice it’s very common for AI applications to treat “popular” as a synonym for “good”, such as the app which promised to rate your photos for quality, but ended up scoring them on how closely they resembled an attractive 20-something white woman.
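To make that failure mode concrete, here’s a minimal sketch of how a “quality” score can quietly turn into a popularity score. All the names are hypothetical; this is not code from the app in question, just the general shape of the problem:

```python
from dataclasses import dataclass

@dataclass
class Photo:
    id: str
    likes: int   # easy to measure
    views: int   # easy to measure
    # Note what's missing: any field for "quality", because nobody can measure it.

def quality_score(photo: Photo) -> float:
    """Claims to rate quality; actually measures popularity."""
    if photo.views == 0:
        return 0.0
    return photo.likes / photo.views  # engagement rate: a popularity proxy

def rank_photos(photos: list[Photo]) -> list[Photo]:
    # Whatever the crowd already likes floats to the top, so any bias in
    # the crowd's behaviour becomes the system's definition of "good".
    return sorted(photos, key=quality_score, reverse=True)
```

Nothing in that code is malicious; the substitution happens the moment the easy-to-measure number is given the hard-to-measure name.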

AI algorithms, fed on data hoovered up without sufficient care, regularly make decisions which are racist, sexist, or simply socially inept. Why?

“Computer systems are proxies for the people who made them.”

Not that the technochauvinists themselves are racist, sexist, or stupid; but there’s no question that the kinds of people who are penalised by these problems are not adequately represented among the people building the systems.

“In order to create a more just technological world, we need more diverse voices at the table when we create technology.”

Machines without humans

The other point about all that data gathered up by humans is the amount of work that goes into it. Where the data already exists, great: why not make use of it? But we shouldn’t pretend that machine learning can operate in a vacuum, without all that human-generated data. As Broussard comments on the headline-grabbing AlphaGo algorithm:

“Millions of hours of human labor went into creating the training data – yet most versions of the AlphaGo story focus on the magic of the algorithms, not the humans who invisibly and over the course of years worked (without compensation) to create the training data.”

We’re in such a hurry to heap praise onto the robots that we sometimes forget to give ourselves enough credit. Broussard gives the example of a tool as everyday as Google, which works as well as it does largely because we have learned how to use it well. Googling effectively is a skill, and that’s a really good example of how the best technology solutions come from fusing together the complementary skills of humans and computers.

Very much the same principle is likely to apply to the world of autonomous vehicles, which (as Francois Chollet alluded to) seem in many ways as far away as ever.

“The machine-learning approach is great for routine tasks inside a formal universe of symbols. It’s not great for operating a two-ton killing machine on streets that are teeming with gloriously unpredictable masses of people.”

In customer service terms, this usually means letting robots (or self-service) handle the bulk of relatively simple enquiries, while humans deal with the complex stuff. If we assume the computer can handle everything, the consequences will be ugly.

“The edge cases require hand curation. You need to build in human effort for the edge cases, or they won’t get done.”
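As an illustration of what “building in human effort” can look like in a contact-centre bot, here is a minimal sketch of confidence-threshold triage. Everything in it (the keyword classifier, the 0.8 threshold, the canned answers) is an illustrative assumption, not anything from the book:

```python
# A minimal sketch of human-in-the-loop triage for a contact-centre bot.
CANNED_ANSWERS = {
    "opening_hours": "We're open 9am-5pm, Monday to Friday.",
    "reset_password": "Use the 'Forgotten password' link on the login page.",
}

def classify_intent(message: str) -> tuple[str, float]:
    """Toy stand-in for a real NLU model: keyword match with a crude confidence."""
    text = message.lower()
    if "hours" in text or "open" in text:
        return "opening_hours", 0.9
    if "password" in text:
        return "reset_password", 0.85
    return "unknown", 0.2

def handle_enquiry(message: str) -> str:
    intent, confidence = classify_intent(message)
    # The crucial design decision: below the threshold, don't guess.
    if confidence < 0.8 or intent not in CANNED_ANSWERS:
        return "Let me put you through to one of our team."  # human hand-off
    return CANNED_ANSWERS[intent]

print(handle_enquiry("What are your opening hours?"))          # routine: bot answers
print(handle_enquiry("My order arrived damaged. I'm furious")) # edge case: human
```

The design choice that matters is the hand-off: below the threshold the bot declines to guess, so the edge cases reach a person instead of getting a confidently wrong answer.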

The phrase “edge cases” can itself be quite damaging, I think. I love this tweet from the designer Mike Monteiro:

“When someone starts flapping their gums about edge cases they are telling you who they’re willing to hurt to make money. In 20+ years in this business I've never seen an edge case that contained cis white boys like me.” 

No one ever thinks of themselves as an edge case, do they?

Conclusion

The Hollywood vision of AI, coupled with technochauvinism, has led to the rushed deployment of machine learning approaches which, launched with hyperbolic claims, are simply not delivering.

“…we are so enthusiastic about using technology for everything…that we stopped demanding that our new technology is good.”

That’s not to say that AI doesn’t have potential (it does), but it makes a lot more sense to see it as a tool that humans can use, rather than as an autonomous agent that can, in most cases, step in to replace human decision-making.

“We should really focus on making human-assistance systems instead of human-replacement systems.”

I think we’re far better off thinking of autonomous vehicles as cruise control+, rather than as self-driving cars, and of chatbots as FAQ+, rather than as a replacement for your human contact-centre agents. As Broussard says,

“…computers are very good at some things and very bad at others, and social problems arise from situations in which people misjudge how suitable a computer is for performing the task.”

Stephen Hampshire

Client Manager
TLF Research
stephenhampshire@leadershipfactor.com