Will computers ever truly understand what we're saying?

January 11, 2016
As two people conversing rely more and more on previously shared concepts, the same area of their brains -- the right superior temporal gyrus -- becomes more active (blue is activity in communicator, orange is activity in interpreter). This suggests that this brain region is key to mutual understanding, as people continually update their shared sense of the conversation's context. Credit: Arjen Stolk, UC Berkeley

From Apple's Siri to Honda's robot Asimo, machines seem to be getting better and better at communicating with humans.

But some neuroscientists caution that today's computers will never truly understand what we're saying because they do not take into account the context of a conversation the way people do.

Specifically, say University of California, Berkeley, postdoctoral fellow Arjen Stolk and his Dutch colleagues, machines don't develop a shared understanding of the people, place and situation - often including a long social history - that is key to human communication. Without such common ground, a computer cannot help but be confused.

"People tend to think of communication as an exchange of linguistic signs or gestures, forgetting that much of communication is about the social context, about who you are communicating with," Stolk said.

The word "bank," for example, would be interpreted one way if you're holding a credit card but a different way if you're holding a fishing pole. Without context, making a "V" with two fingers could mean victory, the number two, or "these are the two fingers I broke."

"All these subtleties are quite crucial to understanding one another," Stolk said, perhaps more so than the words and signals that computers and many neuroscientists focus on as the key to communication. "In fact, we can understand one another without language, without words and signs that already have a shared meaning."

A game in which players try to communicate the rules without talking or even seeing one another helps neuroscientists isolate the parts of the brain responsible for mutual understanding. Credit: Arjen Stolk, UC Berkeley

Babies and parents, not to mention strangers lacking a common language, communicate effectively all the time, based solely on gestures and a shared context they build up over even a short time.

Stolk argues that scientists and engineers should focus more on the contextual aspects of mutual understanding, basing his argument on experimental evidence showing that humans achieve nonverbal mutual understanding using unique computational and neural mechanisms. Some of the studies Stolk has conducted suggest that a breakdown in mutual understanding is behind social disorders such as autism.

"This shift in understanding how people communicate without any need for language provides a new theoretical and empirical foundation for understanding normal social communication, and provides a new window into understanding and treating disorders of social communication in neurological and neurodevelopmental disorders," said Dr. Robert Knight, a UC Berkeley professor of psychology in the campus's Helen Wills Neuroscience Institute and a professor of neurology and neurosurgery at UCSF.

Stolk and his colleagues discuss the importance of conceptual alignment for mutual understanding in an opinion piece appearing Jan. 11 in the journal Trends in Cognitive Sciences.

Brain scans pinpoint site for 'meeting of minds'

To explore how brains achieve mutual understanding, Stolk created a game that requires two players to communicate the rules to each other solely by game movements, without talking or even seeing one another, eliminating the influence of language or gesture. He then placed both players in an fMRI (functional magnetic resonance imager) and scanned their brains as they nonverbally communicated with one another via computer.

He found that the same regions of the brain - located in the poorly understood right temporal lobe, just above the ear - became active in both players during attempts to communicate the rules of the game. Critically, the superior temporal gyrus of the right temporal lobe maintained a steady, baseline activity throughout the game but became more active when one player suddenly understood what the other player was trying to communicate. The brain's right hemisphere is more involved in abstract thought and social interactions than the left hemisphere.

"These regions in the right temporal lobe increase in activity the moment you establish a shared meaning for something, but not when you communicate a signal," Stolk said. "The better the players got at understanding each other, the more active this region became."

This means that both players are building a similar conceptual framework in the same area of the brain, constantly testing one another to make sure their concepts align, and updating only when new information changes that mutual understanding. The results were reported in 2014 in the Proceedings of the National Academy of Sciences.

"It is surprising," said Stolk, "that for both the communicator, who has static input while she is planning her move, and the addressee, who is observing dynamic visual input during the game, the same region of the brain becomes more active over the course of the experiment as they improve their mutual understanding."

Robots' statistical reasoning

Robots and computers, on the other hand, converse based on a statistical analysis of a word's meaning, Stolk said. If you usually use the word "bank" to mean a place to cash a check, then that will be the assumed meaning in a conversation, even when the conversation is about fishing.
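The purely statistical approach Stolk describes can be sketched in a few lines. In this toy model (all word counts are invented for illustration), the system simply picks whichever sense of "bank" the user has used most often, with no regard for what the conversation is currently about:

```python
from collections import Counter

# Hypothetical usage history: how often this user has employed "bank"
# in each sense. A purely statistical system picks the most frequent
# sense regardless of the current conversation.
usage_history = Counter({"financial institution": 42, "riverbank": 3})

def statistical_sense(word_counts):
    """Return the most frequently observed sense, ignoring all context."""
    sense, _ = word_counts.most_common(1)[0]
    return sense

# Even mid-conversation about fishing, the model still answers
# "financial institution", because that sense dominates the history.
print(statistical_sense(usage_history))  # financial institution
```

The failure mode is built in: no amount of talk about rods and rivers changes the frequency table, so the wrong sense keeps winning.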

"Apple's Siri focuses on statistical regularities, but communication is not about statistical regularities," he said. "Statistical regularities may get you far, but it is not how the brain does it. In order for computers to communicate with us, they would need a cognitive architecture that continuously captures and updates the conceptual space shared with their communication partner during a conversation."

Hypothetically, such a dynamic conceptual framework would allow computers to resolve the intrinsically ambiguous communication signals produced by a real person, including drawing upon information stored years earlier.
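One way to picture such an architecture is as a shared-context store that both partners update as the conversation unfolds. This minimal sketch (the topic cues and sense inventory are invented for illustration, not drawn from any real system) resolves "bank" by consulting the mutually established topic rather than global word frequencies:

```python
class SharedContext:
    """A toy model of a conceptual space updated during a conversation."""

    # Invented sense inventory, keyed by conversational topic.
    SENSES = {
        "finance": {"bank": "financial institution"},
        "fishing": {"bank": "riverbank"},
    }

    # Crude keyword cues standing in for real conceptual alignment.
    CUES = {
        "fishing": {"rod", "river", "trout"},
        "finance": {"check", "loan", "deposit"},
    }

    def __init__(self):
        self.topic = None

    def update(self, utterance):
        """Revise the shared topic whenever an utterance signals one."""
        words = set(utterance.lower().split())
        for topic, cues in self.CUES.items():
            if cues & words:
                self.topic = topic

    def interpret(self, word):
        """Resolve a word against the current shared context."""
        return self.SENSES.get(self.topic, {}).get(word, "ambiguous")

ctx = SharedContext()
ctx.update("I grabbed my rod and walked down to the river")
print(ctx.interpret("bank"))  # riverbank
```

The point of the sketch is the `update` step: interpretation is driven by a context that both parties keep revising, which is exactly what a frequency table lacks.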

Stolk's studies have pinpointed other brain areas critical to mutual understanding. In a 2014 study, he used brain stimulation to disrupt a rear portion of the temporal lobe and found that it is important for integrating incoming signals with knowledge from previous interactions. A later study found that in patients with damage to the frontal lobe (the ventromedial prefrontal cortex), decisions to communicate are no longer fine-tuned to stored knowledge about an addressee. Both studies could explain why such patients appear socially awkward in everyday social interactions.

Stolk plans future studies with Knight using fine-tuned brain mapping on the actual surfaces of the brains of volunteers, so-called electrocorticography.

Stolk said he wrote the new paper in hopes of moving the study of communication to a new level with a focus on conceptual alignment.

"Most cognitive neuroscientists focus on the signals themselves, on the words, gestures and their statistical relationships, ignoring the underlying conceptual ability that we use during communication and the flexibility of everyday life," he said. "Language is very helpful, but it is a tool for communication, it is not communication per se. By focusing on language, you may be focusing on the tool, not on the underlying mechanism, the cognitive architecture we have in our brains that helps us to communicate."


16 comments


gwrede
5 / 5 (2) Jan 11, 2016
As an Aspie, I don't think people really understand each other all that well. But still, it's only a matter of time before a computer understands you better than 50% of people around you. I really expect that to happen within the 20 years I hope to live.

Of course, the very definition of Understanding will be redefined by both sides to their purposes, but that will end when we have a complete simulation of the brain, with a complete (if entirely fabricated) life history as memories. Alas, that will be beyond my days. My guess is 30 to 40 years, so my children will surely see the day.

Such a simulation is a nice Academic exercise, but we really don't need such an entity. We need intelligent and logical thinking, without the reptilian and beasty parts. We need unselfish entities, capable of doing the right thing.
TheGhostofOtto1923
3.7 / 5 (3) Jan 11, 2016
Computers will understand what we say better than we do. Further they will tell us what it is we are really trying to say which most of the time is overcomplex elaboration on our desire to survive long enough to reproduce.

It is serendipity that computers can have infinite patience.
axemaster
4.3 / 5 (6) Jan 11, 2016
The human brain is a computer. Therefore, the question "will computers ever truly understand what we're saying?" is already answered.
Hyperfuzzy
5 / 5 (1) Jan 11, 2016
Humans get either a "little" reward with the physical sensation to help choose direction and a computer has no reward response. However, how different does our response to "yes" and "no", may be no different. So should we give the software a healthy "yes" as the target of a given computation, but then you would be required to teach it morality. Why worry if we never give up control? It can go either way, "+" or "-"!
betterexists
not rated yet Jan 11, 2016
Google Translate Dept should hire MORE Researchers for ALL Languages.
We should end up listening to ANY Language & UNDERSTAND just like the Native Speaker can!
Actually, This is a Federal Issue.
In what way does a Private Co benefit from it?
kochevnik
1 / 5 (2) Jan 12, 2016
Computers are not alive. Science does not understand life so this question remains unanswered until a paradigm shift occurs in scientific community, which is still largely ensconced in reductionism
viko_mx
not rated yet Jan 12, 2016
"Will computers ever truly understand what we're saying?"

Absolutely not.
kochevnik
not rated yet Jan 13, 2016
The human brain is a computer. Therefore, the question "will computers ever truly understand what we're saying?" is already answered.
A quantum computer, with additions
viko_mx
1 / 5 (1) Jan 13, 2016
The most perfect creators create less perfected than themselves creatures or machines. The opposite is not possible due to lack of sufficient information and intellectual power. The hierarchy of possibilities is top down.
kochevnik
not rated yet Jan 13, 2016
The most perfect creators create less perfected than themselves creatures or machines. The opposite is not possible due to lack of sufficient information and intellectual power. The hierarchy of possibilities is top down.
Actually the mind is perfect, although the brain is only an antenna. Defects are removed when charge is perfectly aligned as in plant phototaxis. Of course any material alignment with a perfect template will have defects, due to many things including inertia and spin states (information)
SkyLy
not rated yet Jan 14, 2016
The most perfect creators create less perfected than themselves creatures or machines. The opposite is not possible due to lack of sufficient information and intellectual power. The hierarchy of possibilities is top down.


Such a retarded comment. Evolution is not top-down unless you consider a bacteria equal to a human being because they share the same genetic material, but this classification is useless at most. And we're on the brink of upgrading evolution with computing and human intelligence.

By the way, journalists who add "never" to their articles only for buzz should go to hell.
Whydening Gyre
not rated yet Jan 14, 2016
The most perfect creators create less perfected than themselves creatures or machines. The opposite is not possible due to lack of sufficient information and intellectual power. The hierarchy of possibilities is top down.

You need to restructure your concept of hierarchy...
baudrunner
not rated yet Jan 16, 2016
Humans use inflection in speech to infer meaning. Computers can be programmed to detect that, but I doubt we're very far at programming the recognition of a person's intrinsic nuances and subtleties in that regard. Computers are being programmed to interpret all the languages, but, judging from the state of language translators on the web, I doubt that we're anywhere near to making them a better communicator than a human being. Then, there's reason and intuition. A computer might go way off topic if it were to attempt to apply those in a conversation.
kochevnik
not rated yet Jan 16, 2016
Meaning is generated from values on behaviors. Computers have no behaviors they have scripts. Even self-replicating scripts are not behavior. Living things heterodyne waveforms into self/antiself. Just watch a child eat new foods. His sounds of pleasure or distaste are universal spectra and originate from coherent gestalt wave harmonics. That is the pretext to behavior, not scripting. Scripts cannot be folded upon themselves without being destroyed. Hence scripts can never achieve sentience
