Researchers produce 'neural fingerprint' of speech recognition

Nov 10, 2008

Scientists from Maastricht University (Netherlands) have developed a method to look into a person's brain and read out who has been speaking to them and what was said. With the help of neuroimaging and data mining techniques, the researchers mapped the brain activity associated with the recognition of speech sounds and voices.

In their Science article, "'Who' Is Saying 'What'? Brain-Based Decoding of Human Voice and Speech", the four authors demonstrate that speech sounds and voices can be identified by means of a unique 'neural fingerprint' in the listener's brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging (fMRI). With the help of data mining methods, the researchers developed an algorithm that translates this brain activity into unique patterns identifying the speech sound or the voice.
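The article does not detail the decoding algorithm, but the approach it describes, classifying which vowel or which speaker was heard from distributed fMRI activity patterns, is essentially multivariate pattern analysis. The sketch below shows in outline what such a decoder could look like; the simulated voxel data, the trial counts, and the choice of a linear support vector machine are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of fMRI pattern decoding of the kind described above.
# The simulated data, array shapes, labels, and the classifier choice are
# all assumptions; they stand in for the study's real measurements.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 90, 500                 # hypothetical: 90 trials, 500 auditory-cortex voxels
X = rng.normal(size=(n_trials, n_voxels))    # stand-in for trial-wise fMRI activity patterns
vowel = rng.integers(0, 3, n_trials)         # which vowel was heard: /a/, /i/ or /u/
speaker = rng.integers(0, 3, n_trials)       # which of the three speakers spoke it

# Decode the vowel identity from the activity pattern, and separately the speaker.
vowel_acc = cross_val_score(LinearSVC(), X, vowel, cv=5).mean()
speaker_acc = cross_val_score(LinearSVC(), X, speaker, cv=5).mean()
print(f"vowel decoding accuracy:   {vowel_acc:.2f}")
print(f"speaker decoding accuracy: {speaker_acc:.2f}")
```

On random stand-in data the accuracies hover around chance (about 0.33 for three classes); the study's result is that real fMRI patterns support decoding well above that level.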

The acoustic characteristics of the speakers' vocal cord vibrations were found to shape this brain activity, producing distinct neural patterns. Just like real fingerprints, these neural patterns are both unique and specific: the neural fingerprint of a speech sound does not change when it is uttered by somebody else, and a speaker's fingerprint remains the same even when that person says something different.
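To make the invariance claim concrete: if a vowel's neural fingerprint really does not depend on who spoke it, a classifier trained on trials from some speakers should still recognise the vowel in trials from a speaker it has never seen. The following sketch, again on simulated stand-in data rather than the study's own pipeline, shows that cross-speaker generalisation test.

```python
# Hedged illustration of the invariance claim: train a vowel decoder on two
# speakers, test it on the held-out third. Data and classifier are simulated
# assumptions; above-chance accuracy on real data would support invariance.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 90, 500
X = rng.normal(size=(n_trials, n_voxels))   # stand-in fMRI activity patterns
vowel = rng.integers(0, 3, n_trials)
speaker = rng.integers(0, 3, n_trials)

train = speaker != 2                        # train on speakers 0 and 1
test = speaker == 2                         # test on the unseen speaker 2
clf = LinearSVC().fit(X[train], vowel[train])
acc = (clf.predict(X[test]) == vowel[test]).mean()
print(f"vowel decoding on an unseen speaker: {acc:.2f}")
```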

Moreover, this study revealed that part of the complex sound-decoding process takes place in areas of the brain previously associated only with the early stages of sound processing. Existing neurocognitive models assume that sound processing involves different regions of the brain according to a certain hierarchy: after basic processing in the auditory cortex, the more complex analysis (assembling speech sounds into words) takes place in specialised regions of the brain.

However, the findings from this study imply a less hierarchical processing of speech, one that is distributed more widely across the brain.

Source: Netherlands Organization for Scientific Research
