Is the Turing Test Obsolete?

In Turing's imitation game, a human judge converses by text with two unseen participants. If one of the participants is a computer and the other is human, and the judge cannot tell which is which, the computer is said to have passed the Turing test. The test has served as a polestar and long-term goal for AI researchers across the decades. Now, Rohit Prasad, VP and head scientist of Alexa, argues that the Turing test has us barking up the wrong set of trees. He writes in Fast Company:

I believe the goal put forth by Turing is not a useful one for AI scientists like myself to work toward. The Turing Test is fraught with limitations, some of which Turing himself debated in his seminal paper. With AI now ubiquitously integrated into our phones, cars, and homes, it’s become increasingly obvious that people care much more that their interactions with machines be useful, seamless and transparent—and that the concept of machines being indistinguishable from a human is out of touch.

Prasad is absolutely right that the Turing Test has acknowledged limitations. It tests whether a computer behaves like a human being, not whether a computer demonstrates something we might call “intelligence.” The constraints of the scenario might require a computer to misrepresent, for example, how long it took to solve a complex arithmetic problem, in order to avoid being given away by its own performance. It’s also theoretically possible for a sufficiently advanced language processor to pass the Turing Test without possessing any of the deeper capabilities people tend to imagine such a machine would have.

Prasad argues that the question of “When will Alexa pass the Turing test?” doesn’t capture the actual value of Alexa very well. He points out that when Alan Turing wrote his seminal paper in 1950, the first commercial computer hadn’t even been sold yet, and that the Turing Test was never intended to serve as the ultimate test of artificial intelligence. He argues instead that we should build AIs that augment human intelligence and improve human lives “in a way that is both equitable and inclusive.”

He argues in favor of building devices and systems that align with the approach Amazon has taken with Alexa. Instead of trying to pass as human, AI systems should focus on completing everyday tasks efficiently. Ultimately, such systems should combine human-like attributes with machine efficiency. This isn’t exactly a surprising opinion for a person in his position to hold. While I agree there’s no reason to regard the Turing Test as the method by which artificial intelligence should be evaluated, I’m less quick to dismiss it altogether. The Turing Test, as originally envisioned, requires that the computer being tested be capable of fooling a judge on any requested topic. In envisioning the kinds of questions a computer might be expected to answer, Turing didn’t emphasize engineering or math questions. One example from the paper reads:

Interrogator: In the first line of your sonnet which reads, “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?

Witness: It wouldn’t scan.

Interrogator: How about “a winter’s day.” That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter’s day.

Turing doesn’t just imagine a computer that understands scansion. He imagines a computer that, when asked to explain its word choices, recognizes that being compared to a winter’s day is not a compliment. The Turing Test isn’t just a test of a computer’s ability to answer factual questions. It’s a test of a computer’s ability to provide human-equivalent answers to questions about its aesthetic sensibilities.

Even if the Turing Test is obsolete in certain respects, it touches on capabilities that have more to do with advancing Alexa and similar systems than Prasad gives it credit for. It may not be worth pouring enormous energy into designing computers specifically so they can pass for human, but Turing’s thought experiment explicitly imagines a computer that understands how to communicate nuance and can answer follow-up questions by coherently referencing its own sense of beauty.

Is that as marketable as an AI that can manage your calendar and email while screening your calls and playing media on demand? Probably not. But it’s not worthless, either. Not even 70 years on.