Seeing While Hearing Speeds Brain's Processing of Speech

Jan 15, 2005

While the R&B classic "I Heard It Through the Grapevine" advises you to "believe half of what you see and none of what you hear," a University of Maryland study has found that seeing and hearing together speed up the brain's ability to process what someone is saying -- whether or not they're speaking the truth.
The study, published in the Proceedings of the National Academy of Sciences, combines neuroscience and linguistics to confirm for the first time that seeing the speaker talk -- called visual speech -- helps the brain process the words being said -- the auditory speech -- faster than if the words are heard alone.

David Poeppel, associate professor of linguistics at Maryland and senior author of the study, says the study indicates that when a listener can see the speaker's mouth, the listener's brain predicts what sound is about to be heard, a process called predictive coding.

"Moving the mouth comes before the sound," Poeppel said. "The brain uses the slightly preceding visual information to make a prediction, almost instantaneously, of what the sound will be."

That combination of visual and auditory speech, says Poeppel, "gives you the information to get to recognition faster and more accurately" than hearing alone does.
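To make the predictive-coding idea concrete, here is a toy sketch in Python. The viseme groupings and the "candidates checked" cost are illustrative assumptions, not the study's model; the point is only that an early visual cue can shrink the set of possible sounds before the audio arrives.

```python
# Toy illustration of predictive coding in audiovisual speech.
# The viseme-to-syllable groupings below are illustrative assumptions.

# Visemes are mouth shapes that are visible slightly before the sound.
# Each viseme is compatible with only a subset of syllables.
VISEME_CANDIDATES = {
    "bilabial": {"pa", "ba", "ma"},   # lips pressed together
    "alveolar": {"ta", "da", "na"},   # tongue tip behind the teeth
    "velar":    {"ka", "ga"},         # back of the tongue raised
}

ALL_SYLLABLES = set().union(*VISEME_CANDIDATES.values())

def recognize(audio, viseme=None):
    """Return the recognized syllable and how many candidates were checked."""
    # With visual speech, the preceding mouth movement prunes the
    # hypothesis space before the sound is heard -- the "prediction."
    candidates = VISEME_CANDIDATES[viseme] if viseme else ALL_SYLLABLES
    checked = 0
    for syllable in candidates:
        checked += 1
        if syllable == audio:
            return syllable, checked
    return None, checked

# Hearing alone: all eight syllables are possible.
print(recognize("pa"))              # up to 8 candidates checked
# Seeing and hearing: the bilabial mouth shape leaves only three.
print(recognize("pa", "bilabial"))  # at most 3 candidates checked
```

Because the pruned candidate set is smaller, recognition is reached with less work, which loosely parallels the "less brain activity, less time" finding described below.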

Poeppel, along with his Ph.D. student Virginie van Wassenhove and Ken W. Grant of the Walter Reed Army Medical Center, arrived at their findings by conducting an electroencephalography (EEG) study of 26 participants, all native speakers of American English.

Their brain activity was measured while they listened to auditory speech alone and again while they watched a video in which a woman spoke "pa," "ta" and "ka," common English syllables.

Not only did the brain process the sounds faster when visual and auditory speech were combined; it also took "less effort" to reach recognition earlier.

"This discovery contradicts the commonly held notion that audio and visual speech together are more than the sum of their parts," Poeppel said. "It actually takes 'less' brain activity to process the information and do it in less time."

The study also gives the first neurological evidence to support the Analysis-by-Synthesis model of speech processing, proposed by Morris Halle and Ken Stevens in the 1950s and based on the notion of predictive coding.

"This is the first time to connect audiovisual speech to the theory," says Poeppel. "We're increasingly learning about the importance of multi-sensory integration."

The study was supported by a grant from the National Institutes of Health.

Source: University of Maryland
