Researchers produce 'neural fingerprint' of speech recognition

Nov 10, 2008

Scientists from Maastricht University (Netherlands) have developed a method to look into the brain of a person and read out who has spoken to him or her and what was said. With the help of neuroimaging and data mining techniques the researchers mapped the brain activity associated with the recognition of speech sounds and voices.

In their Science article, '"Who" Is Saying "What"? Brain-Based Decoding of Human Voice and Speech', the four authors demonstrate that speech sounds and voices can be identified by means of a unique 'neural fingerprint' in the listener's brain. In the future, this knowledge could be used to improve computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging (fMRI). With the help of data mining methods, the researchers developed an algorithm that translates this brain activity into unique patterns from which the identity of a speech sound or a voice can be read out.
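The article does not include the authors' code, but the idea of translating fMRI activity patterns into labels is, in essence, multivariate pattern classification. The sketch below is a minimal, hypothetical illustration in Python, assuming scikit-learn and using random data in place of real voxel responses; the shapes, labels and classifier choice are illustrative assumptions, not the published pipeline.

```python
# Hypothetical sketch: decode "what" (vowel) and "who" (speaker) from
# fMRI activity patterns with a linear classifier. Random data stands in
# for real voxel responses; shapes and labels are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 180, 500                 # assumed: trials x auditory-cortex voxels
X = rng.normal(size=(n_trials, n_voxels))     # stand-in for single-trial fMRI patterns
vowel = rng.integers(0, 3, size=n_trials)     # labels for /a/, /i/, /u/
speaker = rng.integers(0, 3, size=n_trials)   # labels for the three speakers

# The same activity patterns are decoded twice: once for speech-sound
# identity ("what"), once for speaker identity ("who").
for name, y in [("vowel", vowel), ("speaker", speaker)]:
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name} decoding accuracy: {acc:.2f} (chance ~ 0.33)")
```

With real voxel data, above-chance accuracy on both labels would correspond to the "who" and "what" decoding the study describes.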

The brain activity patterns (neural patterns) were found to be determined by the various acoustic characteristics of the vocal cord vibrations. Just like real fingerprints, these neural patterns are both unique and specific: the neural fingerprint of a speech sound does not change when it is uttered by somebody else, and a speaker's fingerprint remains the same even if that person says something different.
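One way to see why this counts as a fingerprint is the generalization test implied above: a decoder trained on a speech sound spoken by some speakers should still recognise that sound when a new speaker utters it. The following is a hypothetical leave-one-speaker-out check, again with stand-in data and assumed shapes rather than the authors' actual analysis.

```python
# Hypothetical illustration of the invariance claim: train the vowel decoder
# on two speakers and test on the held-out speaker. Above-chance accuracy on
# the unseen speaker would indicate a speaker-independent neural fingerprint.
# Data, shapes, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_voxels = 180, 500
X = rng.normal(size=(n_trials, n_voxels))     # stand-in fMRI activity patterns
vowel = rng.integers(0, 3, size=n_trials)     # /a/, /i/, /u/
speaker = rng.integers(0, 3, size=n_trials)   # speakers 0, 1, 2

for held_out in range(3):
    train, test = speaker != held_out, speaker == held_out
    clf = LogisticRegression(max_iter=1000).fit(X[train], vowel[train])
    print(f"trained without speaker {held_out}: "
          f"accuracy on speaker {held_out} = {clf.score(X[test], vowel[test]):.2f}")
```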

Moreover, this study revealed that part of the complex sound-decoding process takes place in areas of the brain previously associated only with the early stages of sound processing. Existing neurocognitive models assume that sounds are processed by different regions of the brain according to a certain hierarchy: after simple processing in the auditory cortex, the more complex analysis (combining speech sounds into words) takes place in specialised regions of the brain.

However, the findings from this study imply a less hierarchical processing of speech that is distributed more widely across the brain.

Source: Netherlands Organization for Scientific Research
