
Garbage In, Garbage Out: The Amplification of Our Errors Through AI

A review of The AI Delusion by Gary Smith

There’s a saying in programming: computers don’t do what you want them to do, they do what you tell them to do. When you write a program to perform an act of mindless repetition 10,000 times, the computer will execute your command quickly and perfectly. If you make a mistake in codifying that action, the program will repeat your mistake 10,000 times, just as flawlessly. This dumb execution of actions is what makes coding so agonizingly frustrating at times. The promise of the field of Artificial Intelligence (AI), however, is that computers will no longer be dumb executors; they will be, in essence, more powerful and improved versions of ourselves.
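A trivial sketch (my example, not one from the book) makes the point concrete: the machine repeats our error with the same perfect fidelity as it would a correct instruction.

```python
# Hypothetical example: compute a price with 8% sales tax, 10,000 times.
def add_sales_tax(price):
    # The typo below (0.8 instead of 0.08) is our mistake;
    # the computer executes it flawlessly, every single time.
    return price * (1 + 0.8)

totals = [add_sales_tax(9.99) for _ in range(10_000)]
print(f"{len(totals)} prices computed, every one of them wrong.")
```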

In his new book The AI Delusion, Gary Smith argues that we need to disabuse ourselves of this notion: machines are not, and cannot be, more “intelligent” than we are. AI is still just a form of obedience and mimicry, albeit more nuanced than traditional task execution. “Following rules is very different from the instinctive intelligence that humans acquire during their lifetimes,” he writes. “Human intelligence allows us to recognize cryptic language and distorted images, to understand why things happen, to react to unusual events, and so much more that would be beyond our grasp if we were mere rules-followers.” And we place AI on a pedestal at our great peril.

Smith makes his case through various real-world examples of AI, from the 2005 development of Watson, a computer designed to compete at the game of Jeopardy!, to the 2016 presidential election and Hillary Clinton’s undue reliance on the processing of Big Data, to automated stock trades. He goes to great lengths to explain the mechanics behind these systems—which is to say, the reasoning errors that are codified into instructions, or the black-box magic that often nobody fully understands—so that we’re better able to see how supposedly infallible algorithms are actually fertile ground for flaws.

In her TED talk “The Era of Blind Faith in Big Data Must End,” Cathy O’Neil says that most people think algorithms are objective and scientific. “That is a marketing trick,” she says. “Algorithms are opinions embedded in code.” To the extent that you can enter some data and have a program give you a concrete, unambiguous answer, algorithms give the impression that they are simply executing an act of neutral mathematics. Yet if we give a machine data that exhibits a bias, for example, records from a company that consistently gives positive performance reviews to young, white men, it should come as no surprise that we train our algorithm to perpetuate that bias and to suggest hiring more young, white men. Garbage in, garbage out.
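A deliberately crude sketch of what that looks like in practice, using hypothetical data and a naive scoring rule of my own invention (not anything from the book or the talk):

```python
from collections import defaultdict

# Hypothetical historical reviews: (demographic group, got a positive review).
# The data is biased: young white men were consistently rated highly.
past_reviews = [
    ("young_white_man", True), ("young_white_man", True),
    ("young_white_man", True), ("older_woman", True),
    ("older_woman", False), ("young_black_woman", False),
]

# "Training": score each group by its historical positive-review rate.
counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, positive in past_reviews:
    counts[group][0] += positive
    counts[group][1] += 1
scores = {g: pos / total for g, (pos, total) in counts.items()}

# "Hiring recommendation": the bias in the data comes straight back out.
print(max(scores, key=scores.get))  # -> young_white_man
```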

Because Smith is a professor of economics and has written before on statistics and data, this is where he spends most of his attention. He writes about bad data and about common logical fallacies, like mistaking correlation for causation. He discredits data mining entirely by showing various scenarios in which, yes, patterns can be found, but the correlations are entirely spurious. He demonstrates that patterns can be found even in random data like a series of coin tosses, a dead horse he beats so far into the ground over several chapters that it doesn’t need to be buried.
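The coin-toss demonstration is easy to reproduce. Here is a minimal sketch of the kind of exercise Smith describes (the code and numbers are mine, not his): flip coins to build one “target” series and a thousand “predictor” series of pure noise, then mine for the best fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_flips = 50
target = rng.integers(0, 2, n_flips)              # one random coin-flip series
candidates = rng.integers(0, 2, (1000, n_flips))  # 1,000 more, all noise

# Data mining: correlate every candidate with the target, keep the best.
corrs = [np.corrcoef(target, c)[0, 1] for c in candidates]
best = max(range(len(corrs)), key=lambda i: abs(corrs[i]))
print(f"Best 'predictor': series #{best}, r = {corrs[best]:+.2f}")
# With 1,000 tries, |r| around 0.4 is typical: an impressive-looking
# pattern produced by nothing but chance.
```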

It’s here, in this hammering enthusiasm for explaining how people misunderstand and misuse statistics and how futile it is to try to beat the stock market, that Smith loses sight of his argument. He is so determined to run through the wide and deep statistical errors that people make—supposedly in service of illustrating how any AI based on this “logic” will inevitably be flawed, if not outright dangerous—that what he ends up actually writing about is the failure of human intelligence, period. Yes, he offers some intermittent thoughts that show he appreciates human intelligence, at least in theory, such as pointing out that humans can realize they’ve made a mistake and correct it, whereas a computer cannot and would not. But there is a strong undercurrent of smug frustration with all these people who don’t understand statistics the way he does, as when he refers at one point to “investment-guru wannabes.” It’s as though he’s forgotten that he’s meant to be writing about the superiority of human intelligence at all. The thrust of his book essentially becomes: “You’re so stupid you don’t know how intelligent you are.” Or vice versa, perhaps.

For a book about data, it is surprisingly anecdotal and personal. How human.

Despite the noise that effectively cripples Smith’s argument, there are some salvageable points. Most importantly, it’s critical to realize that we have not taught machines to think like we do; we have taught them to act as though they do.

For example, when Apple’s AI assistant Siri was released, many people were tickled by her quippy comebacks to inquiries about her relationship status. But Siri didn’t come up with those quips; a person did. Moreover, “she” has no idea in any real sense what a “boyfriend” is beyond a string of characters that triggers a certain thread of preordained responses stored in a database. The only thing that’s effectively demonstrated is accurate obedience to a system of rules that mimics conversation. That matters, Smith says, because “to be intelligent, one has to understand what one is talking about.”
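A toy rule-follower (mine; Apple’s actual system is of course far more elaborate) shows how little “understanding” such mimicry requires:

```python
# Canned responses keyed to trigger phrases. The responses here are
# made up, but the mechanism is the point: match a string, emit a string.
CANNED_RESPONSES = {
    "do you have a boyfriend": "That's rather personal.",
    "will you marry me": "Let's just stay friends.",
}

def respond(utterance: str) -> str:
    # To the program, "boyfriend" is only a sequence of characters.
    key = utterance.lower().strip(" ?!.")
    return CANNED_RESPONSES.get(key, "I'm not sure I understand.")

print(respond("Do you have a boyfriend?"))  # -> That's rather personal.
```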

We exist in the “real world”—whatever that means—and create things like “data” in order to articulate and record what we experience. Computers have access to the data but not to the real world; they are therefore only able to know what we tell them. To my mind it’s a twisted and severe version of Plato’s Allegory of the Cave, whereby we give machines a series of numbers through which to view the world. There is no ethical problem with this diminished reality, because machines are instruments and not sentient creatures in caves; it reaches a sick level of absurdity, however, when we expect those machines to draw better conclusions about the world than we can.

Part of what makes turning our decision-making over to computers so compelling is our own discomfort with uncertainty. We try to understand what is happening, and what it all means, and we are afraid of making a mistake because we are mortals and must protect our bodies. Whereas we may waffle or remain uncertain about what’s best, computers present a definitive answer. Right or wrong, they make their way with certainty.

Computers help us process large amounts of information very quickly, have seemingly infinite and perfect memories, and can work endlessly without tiring. Those are, by human measures, god-like attributes, so it makes sense that we would eventually start to project omnipotence onto them. But this is the mistake: computers and AI are our creations, and we have made them in our image. Machines cannot discern your intentions from your instructions; they’re inflexible and unimaginative. What we will of them, they amplify with emotionless abandon.

The AI Delusion by Gary Smith is published by Oxford University Press.

Katherine Oktober Matthews (www.oktobernight.com) is an artist and analyst based in The Netherlands. She writes and edits extensively in the field of art, is the author of Unique: Making Photographs in the Age of Ubiquity, and founder of Riding the Dragon.
