When the Tongue Slips, the Eyes Have It

January 20, 2005

How is it that we can look at a door and accidentally call it a window, or call a shovel a rake? When people mislabel objects, they often blame themselves for rushing their words or not paying attention. But research at the Georgia Institute of Technology, published in the December issue of Psychological Science, suggests the mistakes may have less to do with concentration than previously thought. The findings provide insight into how the brain organizes speech and suggest that when the tongue slips, the eyes may be the best window into a speaker’s intent.

“People typically look at objects before naming them; it’s part of the way they plan the words they are going to say,” said Zenzi Griffin, assistant professor of psychology at Georgia Tech. “So, if people are rushing or being inattentive, you might expect that when they made an error, they had spent less time looking at the object. But I found almost no difference in the amount of time people spent looking at an object when they made an error compared with when they didn’t. In fact, people who made an error spent slightly more time looking at the object.”

In the study, Griffin asked participants to name two or three line-drawn objects or describe the action in a scene, while she tracked their eye movements using video cameras outfitted with special software. She identified 41 full or partial speech errors uttered by 33 participants during eye-tracking experiments.
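For readers curious how such gaze durations are quantified, here is a minimal sketch of the kind of measurement involved. It is not the study’s actual software; the data structure, region coordinates, and function names are all hypothetical, used only to illustrate summing fixation time on an object before the speaker starts naming it.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float          # gaze x-coordinate in screen pixels
    y: float          # gaze y-coordinate in screen pixels
    start_ms: float   # fixation onset, milliseconds from trial start
    end_ms: float     # fixation offset, milliseconds from trial start

def gaze_time_on_object(fixations, roi, speech_onset_ms):
    """Total time (ms) spent fixating inside a rectangular region of
    interest (x_min, y_min, x_max, y_max) before the speaker began naming."""
    x_min, y_min, x_max, y_max = roi
    total = 0.0
    for f in fixations:
        if x_min <= f.x <= x_max and y_min <= f.y <= y_max:
            # Count only the portion of the fixation preceding speech onset.
            total += max(0.0, min(f.end_ms, speech_onset_ms) - f.start_ms)
    return total

# Example: two fixations on a drawing of a door before the word was uttered.
fixations = [Fixation(320, 240, 100, 450), Fixation(330, 250, 500, 900)]
print(gaze_time_on_object(fixations, (300, 200, 400, 300), 800))  # 650.0
```

Comparing this quantity across error trials and correct trials is, in essence, the contrast Griffin reports.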

The results, said Griffin, show that at some level people know what they meant to say, and that looking at an object does not guarantee they will name it correctly. They also suggest that when a person makes a speech error, knowing where they are looking may reveal more about their intentions than the words they actually say.

That may be useful to designers of speech recognition software, said Griffin. “Gaze can potentially provide clues to what uncertain words are - at least when people are talking about things in their immediate environment, like in a cockpit or an automobile,” she said. “Gaze can also help to disambiguate which object you are referring to, so if you say ‘Open the door,’ the software could know to which door you are referring.”
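To make the disambiguation idea concrete, here is a hedged sketch of how a speech interface might combine an ambiguous command with a gaze point. This is not the API of any real speech recognition system; the scene layout, coordinates, and function name are illustrative assumptions.

```python
import math

# Hypothetical scene objects with screen positions, e.g. controls in a car UI.
scene = {
    "driver door": (120, 300),
    "passenger door": (520, 300),
    "window": (320, 100),
}

def resolve_referent(noun, gaze_xy, scene):
    """Among objects whose label contains the spoken noun, return the one
    nearest the user's gaze point; fall back to all objects if none match."""
    candidates = {k: v for k, v in scene.items() if noun in k} or scene
    return min(candidates, key=lambda name: math.dist(gaze_xy, candidates[name]))

# "Open the door" while the user is looking toward the left of the scene:
print(resolve_referent("door", (140, 280), scene))  # driver door
```

In this toy version, gaze simply breaks the tie between two objects the word “door” could name, which is the scenario Griffin describes.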

Source: Georgia Institute of Technology
