
Imagine conversing with two parties, A and B, knowing that one of them is a computer and the other a human, but not being able to tell which is which beforehand. You start the conversation and ask a series of questions, waiting for each response. Based on the quality of the replies and your own judgment, you either decide they are convincing enough to come from a human being, just like you, or you decide they are not and conclude they come from a computer. You may be right; you may also get it wrong.
That’s what we call the Turing test, named after Alan Turing, the eminent British mathematician and cryptographer who broke the Nazi Enigma code during World War II, giving the Allies a decisive edge in Europe, and whose work laid the foundations of modern computing. The question raised by this experiment is the following: “Can a machine think?”.
To answer that complex question, Turing proposed the test in his 1950 paper, dubbing it “the imitation game”. It’s a way of probing a machine’s intelligence and its ability to “think”. To pass, a computer program must convincingly impersonate a human in a real-time textual dialogue, such that a human judge cannot reliably tell the program apart from a genuine human. According to the test, if a computer can respond to any question put to it just as a human would, in natural language, we can credit the machine with intelligent thought.
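To make the setup concrete, here is a minimal sketch of one session in Python. It is an illustration only, not a standard implementation: the two reply functions, human_reply and machine_reply, are hypothetical stand-ins for the hidden human and the hidden program.

```python
import random

def imitation_game(questions, human_reply, machine_reply):
    """One blind session: the judge sees answers labelled A and B
    and must guess which label hides the machine."""
    # Randomly assign the two hidden identities to the labels A and B.
    identities = {"human": human_reply, "machine": machine_reply}
    label_a, label_b = random.sample(list(identities), 2)
    for question in questions:
        print(f"Judge: {question}")
        print(f"    A: {identities[label_a](question)}")
        print(f"    B: {identities[label_b](question)}")
    guess = input("Which one is the machine, A or B? ").strip().upper()
    # The judge is fooled when the guess does not match the hidden assignment.
    fooled = (label_a == "machine") != (guess == "A")
    print("The machine fooled the judge." if fooled else "The judge guessed right.")
    return fooled

# Example run with two hypothetical, scripted participants.
if __name__ == "__main__":
    imitation_game(
        ["What is your favourite colour?", "What did you dream about last night?"],
        human_reply=lambda q: "Hmm, let me think about that one...",
        machine_reply=lambda q: "I do not dream. I mean... blue!",
    )
```

The random assignment of labels is the whole point of the design: the labels A and B carry no information, so the judge can rely only on the quality of the answers.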
The test is commonly considered passed when a computer is mistaken for a person more than 30% of the time over a series of five-minute keyboard conversations. To this day, only one AI has partly succeeded in this controversial experiment. In 2014, at an event organized by the University of Reading, Eugene Goostman, a computer program posing as a 13-year-old Ukrainian boy, made the news by claiming to have passed the Turing test: the bot persuaded 33% of the human judges that it was a real person. Nonetheless, because there were just three judges, only one was actually duped, hardly a convincing result. Another issue was that by presenting the chatbot as a 13-year-old child, the organizers primed the judges to overlook nonsensical phrases and evident errors, attributing them to the character’s limited English and young age. Earlier programs fooled judges with a similar framing trick: ELIZA (1966) mimicked a psychotherapist, reflecting the judge’s own statements back as questions, and PARRY (1972) played a paranoid patient whose evasions masked its limitations.
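ELIZA’s trick is simple to sketch. The following is a hedged illustration, not Weizenbaum’s original script: a few regular-expression rules match a keyword, swap first- and second-person words, and echo the rest of the sentence back as a question.

```python
import re

# First/second-person swaps so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# A tiny, hypothetical rule set; the real ELIZA script was far larger.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap pronouns in the matched fragment before echoing it back."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # content-free fallback keeps the dialogue moving

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```

Nothing here understands language at all; the program merely rearranges the judge’s own words, which is precisely why the psychotherapist persona worked so well as camouflage.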
What about Siri, you may ask: did it pass the test? Well, so far it can only handle simple statements and brief phrases and cannot hold a full-fledged conversation. Siri is readily recognized as a machine and does not come across as human. This may improve in the future, but for the moment, Siri fails miserably.
The fact that today’s computers can solve massive equations and perform delicate surgery, yet still struggle with human language, is as scary as it is exciting.
But one question should scare us more than all the others: if AI ever becomes more intelligent than us and aware of itself, what would we do if it intentionally failed the Turing test in order to fool us?