We don't want AI that can understand us – we'd only end up arguing

August 21, 2017 by Constantine Sandis and Richard Harper, The Conversation

Forget the Turing test. Computing pioneer Alan Turing's most pertinent thoughts on machine intelligence come from a neglected paragraph of the same paper that first proposed his famous test for whether a computer could be considered as smart as a human.

"The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Turing's 1950 prediction was not that computers would be able to think in the future. He was arguing that, one day, what we mean when we talk about computers thinking would morph in such a way that it would become a pretty uncontroversial thing to say. We can now see that he was right. Our use of the term has indeed loosened to the point that attributing thought to even the most basic of machines has become common parlance.

Today, advances in technology mean that understanding has become the new thought. And again, the question of whether machines can understand is arguably meaningless. With the development of artificial intelligence and machine learning, there already exists a solid sense in which robots and artificial assistants such as Microsoft's Cortana and Apple's Siri are said to understand us. The interesting questions are just what this sense is and why it matters what we call it.

Defining understanding

Deciding how to define a concept is not the same as making a discovery. It's usually a pragmatic choice based on empirical observations. We no more discover that machines think or understand than we discover that Pluto isn't a planet.

In the case of artificial intelligence, people often talk of 20th-century science fiction writers such as Isaac Asimov as having had prophetic visions of the future. But they didn't so much anticipate the thought and language of contemporary computing technology as directly influence it. Asimov's Three Laws of Robotics have been an inspiration to a whole generation of engineers and designers who talk about machines that learn, understand, make decisions, have emotional intelligence, are empathetic and even doubt themselves.

This vision enchants us into forgetting the other possible ways of thinking about artificial intelligence, gradually eroding the nuance in our definitions. Is this outweighed by what we gain from Asimov's vocabulary? The answer depends on why we might want understanding between humans and machines in the first place. To handle this question we must, naturally, first turn to bees.

As the philosopher of language Jonathan Bennett writes, we can talk about bees having a "language" they use to "understand" each other's "reports" of discoveries of food. And there is a sense in which we can speak – without quote marks even – of bees having thought, language, communication, understanding and other qualities we usually think of as particularly human. But think what a giant mess the whole process would be if they were also able to question each other's motives, grow jealous, become resentful, and so on like humans.

A similar disaster would occur if our sat-nav devices started bickering with us, like an unhappy couple on holiday, over the best route to our chosen destination. The ability to understand can seriously interfere with performance. A good hoover doesn't need to understand why I need more powerful suction in order for it to switch to turbo mode when I press the appropriate button. Why should a good robot be any different?

Understanding isn't (usually) helpful

One of the key things that makes artificial personal assistants such as Amazon's Alexa useful is precisely the fact that our interactions with them could never justify reactive attitudes on either side. This is because they are not the sort of beings that could care or be cared about. (We may occasionally feel anger towards a machine, but it is misplaced.)

We need the assistant's software to have accurate voice-recognition and be as sensitive to the context of our words as possible. But we hardly want it to be capable of understanding – and so also misunderstanding – us in the everyday ways that could produce mutual resentment, blame, gratitude, guilt, indignation, or pride.

Only a masochist would want an artificial PA that could fall out with her, go on strike, or refuse to update its software.

The only exception in which we might conceivably seek such understanding is in the provision of artificial companions for the elderly. As cognitive scientist Maggie Boden warns, it is emotionally dangerous to provide care-bots that cannot actually care but that people could become deeply attached to.

The aim of AI that understands us as well (or as badly) as we understand one another sounds rather grand and important, perhaps the major scientific challenge of the 21st century. But what would be the point of it? We would do better to focus on the other side of the same coin and work towards having a less anthropocentric understanding of AI itself. The better we can comprehend the way AI reasons, the more useful it will be to us.




