Human brain becomes tuned to voices and emotional tone of voice during infancy

March 24, 2010

New research finds that the brains of infants as young as 7 months old demonstrate a sensitivity to the human voice and to emotions communicated through the voice that is remarkably similar to what is observed in the brains of adults. The study, published by Cell Press in the March 25 issue of the journal Neuron, probes the origins of voice processing in the human brain and may provide important insight into neurodevelopmental disorders such as autism.

Dr. Tobias Grossmann from the Centre for Brain and Cognitive Development at the University of London led the study, which was performed in Dr. Angela D. Friederici's laboratory at the Max Planck Institute for Human Cognitive and Brain Sciences in Germany. The researchers used near-infrared spectroscopy to investigate when, during development, regions in the temporal cortex become specifically sensitive to the human voice. These cortical regions have been shown to play a key role in processing spoken language in adults. Grossmann and colleagues observed that 7-month-olds, but not 4-month-olds, showed adult-like increased responses in the temporal cortex to the human voice when compared with nonvocal sounds, suggesting that voice sensitivity emerges between 4 and 7 months of age.

Another important question addressed in this study was whether activity in infants' voice-sensitive regions is modulated by emotional prosody. Prosody, essentially the "music" of speech, can reflect the feelings of the speaker, thereby helping to convey the context of language. In humans, sensitivity to emotional prosody is crucial for social communication. The researchers observed that a voice-sensitive region in the right temporal cortex showed increased activity when 7-month-old infants listened to words spoken with emotional (angry or happy) prosody. Such modulation by emotional signals is thought to be a fundamental mechanism for prioritizing the processing of significant stimuli in the environment.

"Our findings demonstrate that voice-sensitive regions are already specialized and modulated by emotional information by the age of 7 months and raise the possibility that the critical neurodevelopmental processes underlying impaired voice-processing reported in disorders like autism might occur before 7 months," explains Dr. Grossmann. "Therefore, in future work the current approach could be used to assess individual differences in infants' responses to voices and emotional prosody and might thus serve as one of potentially multiple markers that can help with an early identification of infants at risk for a neurodevelopmental disorder."


More information: Grossmann et al.: "The Developmental Origins of Voice Processing in the Human Brain." Neuron 65, 852-858, March 25, 2010. DOI: 10.1016/j.neuron.2010.03.001

