Eugene the Turing test-beating teenbot reveals more about humans than computers

Jun 10, 2014 by Anders Sandberg, The Conversation
The Turing test shows how ready we are to believe in thinking machines.

After years of trying, it looks like a chatbot has finally passed the Turing test. Eugene Goostman, a computer program posing as a 13-year-old Ukrainian boy, managed to convince 33% of judges that he was human after a series of brief conversations with them.

Most people misunderstand the Turing test, though. When Alan Turing wrote his famous 1950 paper on computing machinery and intelligence, the idea that machines could think in any way was totally alien to most people. Thinking – and hence intelligence – could only occur in human minds.

Turing's point was that we do not need to think about what is inside a system to judge whether it behaves intelligently. In his paper he explores how broadly a clever interlocutor can test the mind on the other side of a conversation by talking about anything from maths to chess, politics to puns, Shakespeare's poetry or childhood memories. In order to reliably imitate a human, the machine needs to be flexible and knowledgeable: for all practical purposes, intelligent.

The problem is that many people see the test as a measurement of a machine's ability to think. They miss that Turing was treating the test as a thought experiment: actually doing it might not reveal very useful information, while philosophising about it does tell us interesting things about intelligence and the way we see machines.

Some practical test results have given us food for thought. Turing seems to have overestimated how good an intelligent judge would be at telling humans and machines apart.

Joseph Weizenbaum's 1964 program ELIZA was a parody of a psychotherapist, bouncing responses back at the person it was talking to, interspersed with stock sentences like "I see. Please go on." Weizenbaum was disturbed by how many people were willing to divulge personal feelings to what was little more than an echo chamber, even when fully aware that the program had no understanding or emotions. This Eliza effect, where we infer understanding and mental qualities from mere strings of symbols, is both a bane and a boon to artificial intelligence.
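To give a sense of how little machinery the Eliza effect needs, here is a minimal sketch of an ELIZA-style responder: it mirrors the user's own words back and otherwise falls back on stock phrases. The patterns, reflections and function names are invented for illustration; this is not Weizenbaum's actual rule set.

```python
import random
import re

# A minimal ELIZA-style responder (illustrative sketch only, not Weizenbaum's code).
# It understands nothing: it mirrors the user's wording and pads with stock phrases.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

PATTERNS = [
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

FALLBACKS = ["I see. Please go on.", "How does that make you feel?", "Interesting. Continue."]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)  # nothing matched: bounce back a stock phrase

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))  # "Why do you feel nobody listens to you?"
    print(respond("The weather is nice today"))    # e.g. "I see. Please go on."
```

A couple of dozen lines of string matching is enough to make some people feel listened to, which is exactly what unsettled Weizenbaum.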

"Eugene Goostman" clearly exploits the Eliza effect by pretending to be a Ukrainian 13-year old. Like most successful chatobots, Eugene manages the discussion so as to avoid certain topics. He might not have any information about a certain historical event or a place so he would divert the conversation onto something else if asked about them.

A real 13-year-old could probably solve simple logic problems, while Eugene could not, so if asked to solve one the program refuses to participate. But because Eugene is posing as a teenager, it is perfectly plausible that he, too, might refuse even if he were a real human, in a display of the recalcitrance typical of his age.
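Eugene's actual code has never been published, but the deflection strategy described above can be sketched in a few lines. Everything here – the keyword lists, the canned dodges, the function name – is a hypothetical illustration of the general technique, not the real program.

```python
# Hypothetical sketch of a Goostman-style deflection strategy: never answer
# questions the bot cannot handle, steer the conversation elsewhere, and stay
# in character as a 13-year-old. All lists below are invented for illustration.

AVOID_TOPICS = {"battle", "treaty", "capital", "president"}  # facts it has no data for
LOGIC_CUES = {"solve", "calculate", "how many", "logic"}     # puzzle-style questions

IN_CHARACTER_DODGES = [
    "Boring! Let's talk about something fun. Do you like video games?",
    "My mum says I ask too many questions, so now I ask you: where do you live?",
]
REFUSAL = "Ugh, that sounds like homework. I do enough of that at school."

def reply(question: str) -> str:
    q = question.lower()
    if any(cue in q for cue in LOGIC_CUES):
        return REFUSAL                     # refuse, as a sulky teenager plausibly would
    if any(topic in q for topic in AVOID_TOPICS):
        return IN_CHARACTER_DODGES[0]      # change the subject instead of admitting ignorance
    return IN_CHARACTER_DODGES[1]          # default: turn the question back on the judge

print(reply("Can you solve this little puzzle for me?"))
print(reply("What is the capital of Ukraine?"))
```

The point is that none of this requires intelligence; it requires a persona that makes evasion look natural.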

The real art here – and it is well worth recognising that it takes skill to develop systems like this – lies in constructing the right kind of social interactions and responses that manipulate the judge into thinking and acting in certain ways. True intelligence could be helpful, but social skill is probably far more powerful. Eugene doesn't need to know everything because a teenager wouldn't know everything and can behave in a certain way without arousing suspicion. He'd probably have had a harder time convincing the judges if he had said he was a 50-year-old university professor.

Why do we fall for it so easily? It might simply be that we have evolved with an inbuilt folk psychology that makes us believe that agents think, are conscious, make moral decisions and have free will. Philosophers will happily argue that these things do not necessarily imply each other, but experiments show that people tend to think that if something is conscious it will be morally responsible (even if it is a deterministic robot).

It is hard to conceive of a human-like agent without consciousness but with moral agency, so we tend to ascribe agency and free will to anything that looks conscious. It might just be the presence of eyes, or an ability to talk back, or any other tricks of human-likeness.

So Eugene's success in the Turing test may tell us more about how weak we humans are when it comes to detecting intelligence and agency in conversation than about how smart our machines are.

We spend much of our time behaving like chatbots anyway. We react habitually to our environment, and much of our conversation consists of canned responses or reflections of what the previous speaker said. The number of genuinely intelligent decisions we make over a day is probably rather small. That is not necessarily bad: a smart being will minimise effort, because constantly thinking up entirely new solutions to problems is wasteful.

We should expect descendants of Eugene Goostman to show up in our social environment more and more. The real question is not whether they can think, but what other systems they are connected to. If we play the technological game well, we might create vast systems of software and people that are smarter than their components. Some doubt whether they could actually think, but if they act smart and we benefit from them, do we really care?

User comments

Eikka
Jun 10, 2014
"Turing's point was that we do not need to think about what is inside a system to judge whether it behaves intelligently ... for all practical purposes, intelligent."

But you still haven't defined intelligence, so all of that is just begging the question.

"It is hard to conceive of a human-like agent without consciousness but with moral agency."

Not really. Nominally speaking, moral agency just means the ability to make moral judgements by some moral principle and be accountable for them, which an unthinking machine could do, given that someone programs it with a system of morality and a learning algorithm to adjust what it perceives as "right" and "wrong".

A dog is a moral agent in a limited sense, as far as we condition it with rules, intentionally and unintentionally, and punish it for breaking those rules. The moral accountability bit ultimately depends on whether the dog or machine can change its behaviour or if it's just doing what's necessary.
coldinthesun
Jun 10, 2014
Just a thought...
I find it all a bit scary looking down that road... at some point perhaps humans won't be able to discern the truth, regardless of whether the systems are truly intelligent or autonomous. When our machines have the possibility of being 'actually' intelligent (perhaps more so than their creators), and we cannot discern their capacities and capabilities, whether they are being honest with us (or what that honesty even means), or the true extent of their actions and what they are learning, the consequences are unimaginable. The moment we can't tell the difference is the moment we lose certainty of control.
As technology progresses and increases in complexity, perhaps the programming of expert systems will become increasingly automated by other expert systems and driven by simulation and augmentation of biological systems.
Our technological advancement is growing faster than our ability to meaningfully audit it.