How do we combine faces and voices?

Mar 09, 2011

Human social interactions are shaped by our ability to recognise people. Faces and voices are some of the key features that enable us to identify individuals, and they are rich in information, such as gender, age, and body size, that contributes to a person's unique identity. A large body of neuropsychological and neuroimaging research has determined the various brain regions responsible for face recognition and voice recognition separately, but exactly how the brain combines the two types of information (visual and auditory) is still unknown.

Now a new study, published in the March 2011 issue of Elsevier's Cortex, has revealed the brain networks involved in this "cross-modal" person recognition.

A team of researchers in Belgium used functional magnetic resonance imaging (fMRI) to measure brain activity in 14 participants while they performed a task in which they recognised previously learned faces, voices, and voice-face associations. Dr Frédéric Joassin, Dr Salvatore Campanella, and colleagues compared the brain areas activated when recognising people from only their faces (visual areas) or only their voices (auditory areas) to those activated when using the combined information. They found that voice-face recognition activated specific "cross-modal" regions of the brain, located in the left angular gyrus and the right hippocampus. Further analysis also confirmed that the right hippocampus was connected to the separate visual and auditory areas of the brain.

Recognising a person from the combined information of their face and voice therefore relies not only on the brain networks involved in using visual or auditory information alone, but also on regions associated with attention (left angular gyrus) and memory (hippocampus). According to the authors, the findings support a dynamic view of cross-modal interactions in which the areas processing face and voice information are not simply the final stage of a hierarchical model but may instead work in parallel and influence each other.


More information: The article is "Cross-modal interactions between human faces and voices involved in person recognition" by Frédéric Joassin, Mauro Pesenti, Pierre Maurage, Emilie Verreckt, Raymond Bruyer, Salvatore Campanella, and appears in Cortex, Volume 47, Issue 3 (March 2011). http://www.sciencedirect.com/science/journal/00109452


