What you see affects what you hear (Videos)

Mar 04, 2009

Understanding what a friend is saying in the hubbub of a noisy party can present a challenge - unless you can see the friend's face.

New research from Baylor College of Medicine in Houston and the City College of New York shows that the visual information you absorb when you can see a speaker's face can improve your understanding of the spoken words by as much as sixfold.

Your brain uses the visual information derived from the speaker's face and lip movements to help you interpret what you hear, and this benefit is greatest when the background noise is moderate, said Dr. Wei Ji Ma, assistant professor of neuroscience at BCM and the report's lead author. The report appears online today in the open-access journal PLoS ONE.

Video: Example of congruent AV stimuli (boot), 12 dB noise.

"Most people with normal hearing lip-read very well, even though they don't think so," said Ma. "At certain noise levels, lip-reading can increase word recognition performance from 10 to 60 percent correct."

However, when the environment is very noisy or when the voice you are trying to understand is very faint, lip-reading is difficult.

Video: Example of congruent AV* stimuli (cheap), 12 dB noise.

"We find that a minimum sound level is needed for lip-reading to be most effective," said Ma.

This research is the first to study word recognition in a natural setting, where people freely report what they believe was said. Previous experiments used only limited lists of words for people to choose from.

The lip-reading data help scientists understand how the brain integrates two different kinds of stimuli to come to a conclusion.

Ma and his colleagues constructed a mathematical model that allowed them to predict how successful a person will be at integrating the visual and auditory information.

People actually combine the two stimuli close to optimally, Ma said. What they perceive depends on the reliability of the stimuli.

"Suppose you are a detective," he said. "You have two witnesses to a crime. One is very precise and believable. The other one is not as believable. You take information from both and weigh the believability of each in your determination of what happened."

In a way, lip-reading involves the same kind of integration of information in the brain, he said.
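
In Bayesian models of cue combination, that weighting is made explicit: each cue counts in proportion to its reliability, the inverse of its variance. The sketch below is a minimal illustration of that standard rule using made-up Gaussian cues, not the authors' high-dimensional word-recognition model.

    def fuse_cues(mu_a, var_a, mu_v, var_v):
        """Reliability-weighted (inverse-variance) fusion of two Gaussian cues."""
        rel_a, rel_v = 1.0 / var_a, 1.0 / var_v   # reliability = 1 / variance
        w_a = rel_a / (rel_a + rel_v)             # weight on the auditory cue
        mu = w_a * mu_a + (1.0 - w_a) * mu_v      # fused estimate
        var = 1.0 / (rel_a + rel_v)               # fused variance (always smaller)
        return mu, var

    # A noisy voice (high variance) paired with a clearly visible face
    # (low variance): the fused estimate sits close to the visual cue,
    # just as the detective leans on the more believable witness.
    print(fuse_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0))  # (0.8, 0.8)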

In the experiments, participants watched videos in which a person said a word. When the speaker was shown normally, the visual information provided a large benefit once integrated with the auditory information, especially in moderate background noise. Surprisingly, even when the speaker was replaced with a "cartoon" that did not truly mouth the word, the visual information still helped, though not as much.

In another experiment, the person mouthed one word while the audio carried another, and the brain often fused the two stimuli into an entirely different perceived word, a phenomenon known as the McGurk effect.
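
A toy version of such a model makes this concrete. In the hypothetical sketch below, made-up likelihoods over a three-word vocabulary stand in for the study's high-dimensional audio and video features; multiplying the auditory and visual likelihoods under a flat prior can push the fused percept toward a word that neither cue favored on its own.

    words = ["ba", "ga", "da"]
    p_audio = [0.60, 0.10, 0.30]   # the audio sounds most like "ba"
    p_video = [0.05, 0.65, 0.30]   # the lips look most like "ga"

    joint = [a * v for a, v in zip(p_audio, p_video)]   # flat prior over words
    total = sum(joint)
    posterior = {w: round(j / total, 2) for w, j in zip(words, joint)}

    print(posterior)
    # {'ba': 0.16, 'ga': 0.35, 'da': 0.49}: "da" wins even though neither
    # cue alone ranked it first, echoing the fused-percept effect above.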

"The mathematical model can predict how often the person will understand the word correctly in all these contexts," Ma said.

More information: Wei Ji Ma, Xiang Zhou, Lars A. Ross, John J. Foxe, and Lucas C. Parra, "Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space," PLoS ONE, in press, to appear March 2009. dx.plos.org/10.1371/journal.pone.0004638

Source: Baylor College of Medicine
