Zeroing in on the brain's speech 'receiver'

June 20, 2007

A particular resonance pattern in the brain's auditory processing region appears to be key to its ability to discriminate speech, researchers have found. An inherent rhythm of neural activity known as the theta band reacts specifically to spoken sentences by changing its phase. The researchers also noted that the natural oscillation of this frequency provides further evidence that the brain samples speech in segments about the length of a syllable.

The findings represent the first time that such a broad neural response has been identified as central to perceiving the highly complex dynamics of human speech, said the researchers. Previous studies have explored the responses of individual neurons to speech sounds, but not the response of the auditory cortex as a whole.

David Poeppel and Huan Luo published their findings in the June 21, 2007 issue of the journal Neuron, published by Cell Press.

In their experiments, the researchers asked volunteers to listen to spoken sentences such as “He held his arms close to his sides and made himself as small as possible.” At the same time, the subjects’ brains were scanned using magnetoencephalography. In this imaging technique, sensitive detectors are used to measure the magnetic fields produced by electrical activity in brain regions.

Poeppel and Luo pinpointed the theta band—which oscillates between four and eight cycles per second—as one that changed its phase pattern with unique sensitivity and specificity in response to the spoken sentences. What’s more, as the researchers degraded the intelligibility of the sentences, the theta band pattern lost its tracking resonance with the speech.

The researchers said their findings suggest that the brain discriminates speech by modulating the phase of the continuously generated theta wave in response to the incoming speech signal. What’s more, they said, the time-dependent characteristics of this theta wave suggest that the brain samples the incoming speech in “chunks” that are about the length of a syllable from any given language.
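The core measurement behind this kind of finding is the instantaneous phase of the theta band (4–8 Hz) in the recorded signal. A minimal sketch of that analysis step, using a band-pass filter and the Hilbert transform, is shown below; the sampling rate, filter order, and synthetic test signal are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)     # 4 seconds of data

# Synthetic "MEG" trace: a 6 Hz theta component buried in noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass to the theta band (4-8 Hz).
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, signal)

# Instantaneous phase via the analytic signal (Hilbert transform),
# giving one phase value in [-pi, pi] per sample.
phase = np.angle(hilbert(theta))
```

Comparing how reproducible this phase time course is across repetitions of the same sentence, versus different sentences, is one standard way to quantify the "phase tracking" the study describes.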

Source: Cell Press
