Music in our ears: The science of timbre

Nov 01, 2012

New research, published in PLOS Computational Biology, offers insight into the neural underpinnings of musical timbre. Mounya Elhilali of Johns Hopkins University and colleagues used mathematical models, based on experiments in both animals and humans, to accurately predict sound-source recognition and perceptual timbre judgments by human listeners.

A major contributor to our ability to analyze music and recognize instruments is timbre, a hard-to-quantify concept loosely defined as everything in a musical sound that is not duration, loudness or pitch. Timbre is what lets us instantly decide whether a sound is coming from a violin or a piano.

The researchers at Johns Hopkins University set out to develop a model that would simulate how the brain works when it receives auditory signals: how it looks for specific features, and whether something in the signal allows the brain to discern these different qualities.

The authors devised a computer model to mimic how specific brain processes transform sounds into the representations that allow us to recognize the type of sound we are listening to. The model correctly identified which instrument was playing (out of a total of 13 instruments) with an accuracy of 98.7 percent.
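According to the paper's methods, the model's front end simulates the cochlea as a bank of 128 constant-Q bandpass filters equally spaced on a logarithmic frequency axis spanning 5.3 octaves. A minimal sketch of how such log-spaced, constant-Q center frequencies can be laid out (the lowest frequency `F_LOW` and the quality factor `Q` are illustrative assumptions, not values from the paper):

```python
import math

N_CHANNELS = 128   # number of cochlear filter channels (from the paper)
N_OCTAVES = 5.3    # frequency span in octaves (from the paper)
F_LOW = 180.0      # assumed lowest center frequency in Hz (illustrative)

# Center frequencies equally spaced on a logarithmic (octave) axis:
# each step multiplies frequency by a fixed ratio.
step = N_OCTAVES / (N_CHANNELS - 1)
center_freqs = [F_LOW * 2.0 ** (i * step) for i in range(N_CHANNELS)]

# "Constant-Q" means each filter's bandwidth grows in proportion to its
# center frequency, so the ratio f_c / bandwidth (the Q factor) is fixed.
Q = 4.0  # illustrative Q value
bandwidths = [f / Q for f in center_freqs]

print(f"{center_freqs[0]:.1f} Hz .. {center_freqs[-1]:.1f} Hz")
```

Because the spacing is logarithmic, neighboring low-frequency filters sit only a few hertz apart while high-frequency filters are hundreds of hertz apart, roughly matching the cochlea's frequency resolution.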

The model also mirrored how human listeners make judgment calls about timbre. The researchers asked 20 people to listen to pairs of sounds played by different musical instruments and rate how similar the sounds seemed. A violin and a cello are perceived as closer to each other than a violin and a flute. The researchers also found that wind and percussive instruments tend, overall, to be the most different from each other, followed by strings and percussion, then strings and winds. The computer model reproduced these subtle judgments of timbre quality.
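One common way such model-versus-listener comparisons are made is to reduce each sound to a feature vector and measure how close the vectors are. The sketch below is illustrative only: the three-dimensional "feature vectors" are invented, whereas the actual study derives its features from simulated cortical responses.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical timbre features (e.g. brightness, attack, harmonicity);
# the values are made up for illustration.
violin = [0.9, 0.7, 0.2]
cello  = [0.8, 0.6, 0.3]
flute  = [0.1, 0.2, 0.9]

sim_violin_cello = cosine_similarity(violin, cello)
sim_violin_flute = cosine_similarity(violin, flute)

# Consistent with the listeners' judgments, the violin-cello pair
# scores as more similar than the violin-flute pair.
print(sim_violin_cello > sim_violin_flute)
```

A model whose pairwise similarities correlate with listeners' ratings across all instrument pairs can then be said to capture perceptual timbre, which is the kind of comparison the study reports.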

"There is much to be learned from how the human brain processes complex information such as musical timbre and translating this knowledge into improved computer systems and hearing technologies," Elhilali said.


More information: Patil K, Pressnitzer D, Shamma S, Elhilali M (2012) Music in Our Ears: The Biological Bases of Musical Timbre Perception. PLoS Comput Biol 8(11):e1002759. doi:10.1371/journal.pcbi.1002759




