Researchers find how brain hears the sound of silence (w/ Video)

February 10, 2010
Michael Wehr, professor of psychology and neuroscience at the University of Oregon, with undergraduate student researcher Xiang Gao, a co-author. Credit: Michael McDermott

A team of University of Oregon researchers has isolated an independent processing channel of synapses inside the brain's auditory cortex that deals specifically with shutting off sound processing at appropriate times. Such regulation is vital for hearing and for understanding speech.

The discovery, detailed in the Feb. 11 issue of the journal Neuron, goes against a long-held assumption that the signaling of a sound's appearance and its subsequent disappearance are both handled by the same pathway. The new finding, which supports an emerging theory that a separate set of synapses is responsible, could lead to new, distinctly targeted therapies such as improved hearing devices, said Michael Wehr, a professor of psychology and member of the UO Institute of Neuroscience.

"It looks like there is a whole separate channel that goes all the way from the ear up to the brain that is specialized to process sound offsets," Wehr said. The two channels finally come together in a brain region called the , situated in the .

Video: Michael Wehr of the University of Oregon discusses the ramifications of his lab's newly published findings. Credit: University of Oregon

To do the research, Wehr and two UO undergraduate students -- lead author Ben Scholl, now a graduate student at Oregon Health and Science University in Portland, and Xiang Gao -- monitored the activity of neurons and their connecting synapses as rats were exposed to millisecond bursts of tones, looking at the responses to both the start and the end of each sound. They tested varying lengths and frequencies of sounds in a series of experiments.
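To make the onset-versus-offset comparison concrete, here is a minimal, purely illustrative Python sketch (not the authors' analysis code) that counts spikes in short windows aligned to a tone's start and end. The spike times, window length and stimulus times are hypothetical values chosen only for the example.

# Illustrative sketch: compare a cell's response just after a tone starts
# with its response just after the tone stops. All numbers are made up.
import numpy as np

def onset_offset_counts(spike_times, stim_onset, stim_offset, window=0.05):
    # Count spikes in a `window`-second bin following stimulus onset and offset.
    spike_times = np.asarray(spike_times)
    onset_count = int(np.sum((spike_times >= stim_onset) &
                             (spike_times < stim_onset + window)))
    offset_count = int(np.sum((spike_times >= stim_offset) &
                              (spike_times < stim_offset + window)))
    return onset_count, offset_count

# A toy cell that fires mostly after the tone ends (an "offset" response):
spikes = [0.105, 0.110, 0.302, 0.305, 0.310, 0.315]   # seconds
print(onset_offset_counts(spikes, stim_onset=0.1, stim_offset=0.3))
# -> (2, 4): weak onset response, stronger offset response

Run on this toy spike train, the cell shows a weak onset response and a stronger offset response, the kind of contrast the researchers measured when sorting responses into the two channels.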

It became clear, the researchers found, that one set of synapses responded "very strongly at the onset of sounds," while a different set of synapses responded to the sudden disappearance of sounds. There was no overlap between the two responding sets, the researchers noted. The end of one sound did not affect the response to a new sound, reinforcing the idea of separate processing channels.

The UO team also noted that responses to the end of a sound involved different frequency tuning, duration and amplitude than responses to the start of a sound, findings that agree with a trend cited in at least three other studies in the past decade.

"Being able to perceive when sound stops is very important for speech processing," Wehr said. "One of the really hard problems in speech is finding the boundaries between the different parts of words. It is really not well understood how the brain does that."

As an example, he noted the difficulty some people have when they are at a noisy cocktail party and are trying to follow one conversation amid competing background noises. "We think that we've discovered brain mechanisms that are important in finding the necessary boundaries between words that help to allow for successful speech recognition and hearing," he said.
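As a loose signal-processing analogy (not a model of the cortical mechanism described in the study), the Python sketch below marks candidate boundaries in an audio stream by detecting where short-time energy drops below a threshold, the engineering counterpart of responding to a sound's offset. The frame length, threshold and test signal are arbitrary, illustrative choices.

# Toy illustration of why offset detection helps segmentation: report the
# times where a signal's short-time energy falls from loud to quiet.
import numpy as np

def find_offsets(signal, rate, frame=0.02, threshold=0.01):
    # Return times (s) where energy drops from above to below the threshold.
    n = int(frame * rate)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    energy = np.mean(frames ** 2, axis=1)
    loud = energy > threshold
    # an "offset" is a loud frame followed by a quiet one
    offsets = np.where(loud[:-1] & ~loud[1:])[0] + 1
    return offsets * frame

# Two tone bursts separated by silence: the detector should report two offsets.
rate = 16000
t = np.arange(0, 0.1, 1 / rate)
burst = np.sin(2 * np.pi * 1000 * t)
silence = np.zeros_like(burst)
signal = np.concatenate([burst, silence, burst, silence])
print(find_offsets(signal, rate))   # approximately [0.1, 0.3]

On this toy signal of two tone bursts separated by silence, the detector reports offsets at roughly 0.1 and 0.3 seconds, the points where each burst ends and a boundary begins.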

The research -- funded in part by the UO's Robert and Beverly Lewis Center for Neuroimaging Fund -- aims to provide a general understanding of how areas of the brain function. The new findings, Wehr said, could also prove useful in working with children who have deficits in speech and learning, as well as in the design of hearing aids and cochlear implants. He also noted that people with dyslexia have problems defining the boundaries of sounds in speech; tapping these processing areas in therapy could boost reading skills.
