Do you hear what I see?

February 20, 2007

New research pinpoints specific areas in the sound-processing centers of macaque monkeys' brains that show enhanced activity when the animals watch a video.

This study confirms a number of recent findings but contradicts the classical view that hearing, taste, touch, sight, and smell are each processed in distinct areas of the brain and integrated only later. The new research, led by Christoph Kayser, PhD, at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, was published in the February 21 issue of The Journal of Neuroscience.

"This study confirms that what we used to call the ‘auditory cortex’ should really be thought of as much more complex in terms of its response properties," says Robert Zatorre, PhD, head of the auditory cognitive neuroscience laboratory at McGill University. "The textbook-standard view of sensory systems as isolated from one another is no longer tenable." Zatorre did not participate in the study.

Kayser’s team used functional magnetic resonance imaging to map 11 small, tightly packed fields in the monkeys’ auditory cortex that differ by the frequency of sound they process. Scans recorded activity in the monkeys’ brains under three conditions: watching a video with its sound, watching it without sound, and listening to the soundtrack alone. The researchers found that fields in the hindmost part of the auditory cortex were active when the monkeys watched the video without sound, and that this activity was enhanced when the video and sound were presented together.

"This finding suggests that sensory integration, which is so fundamental to complex mental activity, takes place at very early processing stages," says Daniel Tranel, PhD, of the University of Iowa, who is not affiliated with the study. "This knowledge could help scientists pinpoint sources of extraordinary sensory processing, such as creativity and genius, as well as abnormal sensory processing, as seen in schizophrenia."

Kayser notes that the findings also could be used to reveal the role of audio-visual integration in communication or to help pin down where sounds are coming from. "Clearly, our acoustical understanding often improves if we can see the lips of the speaker—for example at a crowded cocktail party," he says. "However, currently it is not clear whether and how audio-visual interactions are specialized for the processing of communication signals. The present study clearly shows where in the auditory system researchers have to focus."

Source: Society for Neuroscience
