What you see affects what you hear (Videos)

Mar 04, 2009

Understanding what a friend is saying in the hubbub of a noisy party can present a challenge - unless you can see the friend's face.

New research from Baylor College of Medicine in Houston and the City College of New York shows that the visual information you absorb when you can see the speaker's face can improve your understanding of the spoken words by as much as sixfold.

Your brain uses the visual information derived from the speaker's face and lip movements to help you interpret what you hear, and this benefit is greatest when the background noise is moderate, said Dr. Wei Ji Ma, assistant professor of neuroscience at BCM and lead author of the study, which appears online today in the open-access journal PLoS ONE.

[Video: Example of congruent AV stimuli (boot) - 12 dB noise]

"Most people with normal hearing lip-read very well, even though they don't think so," said Ma. "At certain noise levels, lip-reading can increase word recognition performance from 10 to 60 percent correct."

However, when the environment is very noisy or when the voice you are trying to understand is very faint, lip-reading is difficult.

[Video: Example of congruent AV* stimuli (cheap) - 12 dB noise]

"We find that a minimum sound level is needed for lip-reading to be most effective," said Ma.

This research is the first to study word recognition in a natural setting, in which people freely report what they believe was said. Previous experiments used only limited lists of words for people to choose from.

The lip-reading data help scientists understand how the brain integrates two different kinds of stimuli to come to a conclusion.

Ma and his colleagues constructed a mathematical model that allowed them to predict how successfully a person would integrate the visual and auditory information.

People actually combine the two stimuli close to optimally, Ma said. What they perceive depends on the reliability of the stimuli.

"Suppose you are a detective," he said. "You have two witnesses to a crime. One is very precise and believable. The other one is not as believable. You take information from both and weigh the believability of each in your determination of what happened."

In a way, lip-reading involves the same kind of integration of information in the brain, he said.
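To make the detective analogy concrete: in the textbook version of this idea, each cue is weighted by its reliability, defined as the inverse of its variance. The short Python sketch below illustrates that inverse-variance weighting with made-up numbers; it is a simplified stand-in for the idea, not the paper's actual high-dimensional model.

# Toy sketch of reliability-weighted cue integration (the textbook
# Gaussian case). All numbers are invented for illustration; the
# paper's actual model operates in a high-dimensional feature space.

def integrate(x_aud, var_aud, x_vis, var_vis):
    """Combine an auditory estimate and a visual estimate,
    weighting each by its reliability (1 / variance)."""
    w_aud = 1.0 / var_aud
    w_vis = 1.0 / var_vis
    x_combined = (w_aud * x_aud + w_vis * x_vis) / (w_aud + w_vis)
    var_combined = 1.0 / (w_aud + w_vis)  # never worse than the best cue alone
    return x_combined, var_combined

# A noisy party: hearing is unreliable, lip-reading is not.
estimate, variance = integrate(x_aud=2.0, var_aud=4.0, x_vis=1.0, var_vis=1.0)
print(estimate, variance)  # 1.2 0.8 -- the percept leans toward the visual cue

Note that the combined variance is smaller than either cue's alone, which is why even an unreliable second witness is still worth listening to.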

In the experiments, participants watched videos in which a person said a word. When the speaker is shown normally, the visual information provides a large benefit once it is integrated with the auditory information, especially in moderate background noise. Surprisingly, even when the speaker is replaced by a "cartoon" that does not truly mouth the word, the visual information still helps, though not as much.

In another condition, the person mouths one word while the audio plays another, and the brain often integrates the two stimuli into an entirely different perceived word.

"The mathematical model can predict how often the person will understand the word correctly in all these contexts," Ma said.

More information: Wei Ji Ma, Xiang Zhou, Lars A. Ross, John J. Foxe, Lucas C. Parra, "Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space," PLoS ONE, in press, to appear March 2009. dx.plos.org/10.1371/journal.pone.0004638

Source: Baylor College of Medicine
