When the Tongue Slips, the Eyes Have It

Jan 20, 2005

How is it that we can look at a door and accidentally call it a window, or call a shovel a rake? When people mislabel objects, they often blame themselves for rushing their words or not paying attention. But research at the Georgia Institute of Technology, published in the December issue of Psychological Science, suggests the mistakes may have less to do with concentration than previously thought. The findings provide insight into how the brain organizes speech and suggest that when the tongue slips, the eyes may be the best window into a speaker’s intent.

“People typically look at objects before naming them; it’s part of the way they plan the words they are going to say,” said Zenzi Griffin, assistant professor of psychology at Georgia Tech. “So if people were rushing or being inattentive, you might expect that when they made an error they had spent less time looking at the object. But I found almost no difference in the amount of time people spent looking at an object when they made an error compared to when they didn’t. In fact, people who made an error spent slightly more time looking at the object.”

In the study, Griffin asked participants to name two or three line-drawn objects or describe the action in a scene, while she tracked their eye movements using video cameras outfitted with special software. She identified 41 full or partial speech errors uttered by 33 participants during eye-tracking experiments.

The results, said Griffin, show that at some level people know what they meant to say, and that looking at an object doesn’t guarantee they will name it correctly. They also suggest that when a person makes a speech error, knowing what they are looking at may be a better guide to their intentions than the words they actually say.

That may be useful to designers of speech recognition software, said Griffin. “Gaze can potentially provide clues to what uncertain words are - at least when people are talking about things in their immediate environment, like in a cockpit or an automobile,” she said. “Gaze can also help to disambiguate which object you are referring to, so if you say ‘Open the door,’ the software could know to which door you are referring.”
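To make the idea concrete, here is a minimal sketch of how gaze data might disambiguate a spoken command. All names here (GazeSample, resolve_referent, the example objects) are hypothetical illustrations, not part of the study or any real speech-recognition API; the only assumption taken from the research is that speakers tend to fixate an object shortly before naming it.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    timestamp: float   # seconds since the session started
    target: str        # object the eye tracker reports the user fixated

def resolve_referent(word, utterance_time, gaze, window=1.0):
    """Guess the intended referent of an ambiguous word from recent gaze.

    Because speakers typically look at an object just before naming it,
    the most recent fixation within `window` seconds before the word is
    spoken is a reasonable candidate for the intended object.
    """
    candidates = [s for s in gaze
                  if utterance_time - window <= s.timestamp <= utterance_time
                  and word in s.target]   # e.g. "door" matches "left door"
    return candidates[-1].target if candidates else None

# Example: two doors are in view; the last fixation picks out which one
# "Open the door" most plausibly refers to.
gaze_log = [GazeSample(0.2, "right door"), GazeSample(1.4, "left door")]
print(resolve_referent("door", utterance_time=1.8, gaze=gaze_log))
# -> "left door"
```

A real system would of course fuse gaze with acoustic confidence scores and scene geometry; this sketch only shows the core disambiguation step Griffin describes.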

Source: Georgia Institute of Technology
