Zeroing in on the brain's speech 'receiver'

June 20, 2007

A particular resonance pattern in the brain's auditory processing region appears to be key to its ability to discriminate speech, researchers have found. An inherent rhythm of neural activity known as the theta band reacts specifically to spoken sentences by changing its phase. The researchers also noted that the natural frequency of this oscillation provides further evidence that the brain samples speech in segments about the length of a syllable.

The findings represent the first time that such a broad neural response has been identified as central to perceiving the highly complex dynamics of human speech, said the researchers. Previous studies have explored the responses of individual neurons to speech sounds, but not the response of the auditory cortex as a whole.

David Poeppel and Huan Luo published their findings in the June 21, 2007 issue of the journal Neuron, published by Cell Press.

In their experiments, the researchers asked volunteers to listen to spoken sentences such as "He held his arms close to his sides and made himself as small as possible." At the same time, the subjects' brains were scanned using magnetoencephalography (MEG), an imaging technique in which sensitive detectors measure the magnetic fields produced by electrical activity in brain regions.

Poeppel and Luo pinpointed the theta band, which oscillates between four and eight cycles per second (4-8 Hz), as the one that changed its phase pattern with unique sensitivity and specificity in response to the spoken sentences. What's more, as the researchers degraded the intelligibility of the sentences, the theta-band phase pattern lost its ability to track the speech signal.
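To make that idea concrete, the minimal Python sketch below shows one standard way theta-phase tracking of this kind can be quantified. It is an illustration under assumptions, not the authors' exact pipeline (the paper compared phase patterns across trials; here a common related measure, inter-trial phase coherence, stands in): the signal is band-passed to 4-8 Hz, its instantaneous phase is extracted with a Hilbert transform, and phase consistency across repeated trials is measured.

```python
# Illustrative sketch: theta-band (4-8 Hz) phase extraction and inter-trial
# phase coherence (ITC). A stand-in for the general technique, not the
# authors' exact analysis.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_phase(signal, fs, low=4.0, high=8.0, order=4):
    """Return the instantaneous theta-band phase of a 1-D signal."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    theta = filtfilt(b, a, signal)      # zero-phase band-pass filter
    return np.angle(hilbert(theta))     # instantaneous phase in radians

def inter_trial_coherence(trials, fs):
    """ITC at each time point: 1.0 means identical phase on every trial."""
    phases = np.array([theta_phase(tr, fs) for tr in trials])
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Toy demonstration: trials sharing a 6 Hz rhythm (as if phase-locked to
# the same sentence) yield high ITC; unrelated noise yields low ITC.
fs = 200                                 # sampling rate, Hz
t = np.arange(0, 3, 1 / fs)              # 3 s of simulated data per trial
rng = np.random.default_rng(0)
locked = [np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
          for _ in range(30)]
noise = [rng.standard_normal(t.size) for _ in range(30)]
print("phase-locked ITC ~", inter_trial_coherence(locked, fs).mean().round(2))
print("noise ITC        ~", inter_trial_coherence(noise, fs).mean().round(2))
```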

The researchers said their findings suggest that the brain discriminates speech by modulating the phase of the continuously generated theta wave in response to the incoming speech signal. In addition, they said, the time course of this theta wave suggests that the brain samples incoming speech in "chunks" about the length of a syllable in any given language.
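The arithmetic behind the syllable-length claim is simple: a 4-8 Hz oscillation completes one cycle every 125-250 ms, which is on the order of a typical spoken syllable's duration (commonly a few hundred milliseconds). A trivial check:

```python
# Each theta cycle spans roughly one syllable's worth of speech.
for hz in (4.0, 8.0):
    print(f"{hz:.0f} Hz theta -> one cycle every {1000 / hz:.0f} ms")
# 4 Hz theta -> one cycle every 250 ms
# 8 Hz theta -> one cycle every 125 ms
```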

Source: Cell Press
