June has been the cruelest month for Artificial Intelligence. This month, a computer program beat the Turing Test, and thereby invalidated the Turing Test.
The Turing Test, for those who don’t know, is a test based on a premise by Alan Turing, the computational godfather. The idea: if a computer can be mistaken for a human being, by other human beings, in conversation, then it is functionally “intelligent,” conversation being the only way we have to measure intelligence.
For decades, there have been contests in which judges try to determine whether the party on “the other side” of a computer conversation is a human being or a program. The computers have never been very convincing: to “win” the Turing Test, a program needs to fool just 30 percent of the human judges, and none had done it until now.
Meanwhile, the conditions of the Turing Test have become the conditions we live with every day: talking to machines and invisible interlocutors across a social media landscape often dominated by chatbots, algorithms, and PR teams posing as celebrities. What were supposed to be highly artificial constraints have become nature and nurture to the entire first world.
Now a chatbot has finally passed the Turing Test by pretending to be a 13-year-old Ukrainian boy: a non-English speaker who could be (and was) excused for uttering non sequiturs and failing to understand basic questions.
It managed to fool 33 percent of the judges: a technical win that has impressed nobody.
But set aside the question of whether a faux-Ukrainian teenager was ever the interlocutor Alan Turing had in mind (it wasn’t). The more important point is this: the machine that first “passed” the test wasn’t a cutting-edge neural network carefully modeled on the human brain. It was a chatbot: a cheap version of the software low-end companies use to manage their Twitter accounts.
It may have passed the Turing Test, but nobody actually believes it’s “thinking” in any sense of the term.
So while the Turing Test may have had a useful place in AI history, and may still be good at getting headlines, we’ve definitively determined that a machine that absolutely does not think can produce a false positive.
The test is no good, and perhaps the advocates of hard AI should rethink the notion that being able to function among beings who think is the same as thinking. The Turing Test merely ignores interiority; its disciples claim no such thing exists. This is the most willful kind of ignorance. Interiority is what we mean by thinking: the fundamental ground on which all else depends.
As for why a bot like this one could fool a third of the people some of the time, essayists from Andrew Leonard to Brian Christian have hit the bot on the head: it’s not that computers are getting smarter, it’s that we’re getting dumber.
This is a theme we have expanded upon before in The New Existentialists: our lives are now lived on technology’s timetable, at technology’s speed, and we focus our economic forces on what technology is good at. We are being trained to think more and more like machines, and a sadly inevitable result is that our expectations of humanity get lower and lower.
The purpose of so much of our society ought to be to bring out the best in our humanity, never mind what the machines can do. Are we sure that a third of the people can connect with a complete stranger some of the time?
— Benjamin Wachs