Tuning in to a new hearing mechanism

Nov 10, 2010 by Anne Trafton

More than 30 million Americans suffer from hearing loss, and about 6 million wear hearing aids. While those devices can boost the intensity of sounds coming into the ear, they are often ineffective in loud environments such as restaurants, where you need to pick out the voice of your dining companion from background noise.

To do that, you need to be able to distinguish sounds with subtle differences. The human ear is exquisitely adapted for that task, but the underlying mechanism responsible for this selectivity has remained unclear. Now, new findings from MIT researchers reveal an entirely new mechanism by which the human ear sorts sounds, a discovery that could lead to improved, next-generation assistive hearing devices.

“We’ve incorporated into hearing aids everything we know about how sounds are sorted, but they’re still not very effective in problematic environments such as restaurants, or anywhere there are competing speakers,” says Dennis Freeman, MIT professor of electrical engineering, who is leading the research team. “If we knew how the ear sorts sounds, we could build an apparatus that sorts them the same way.”

In a 2007 Proceedings of the National Academy of Sciences paper, Freeman and his associates A.J. Aranyosi and lead author Roozbeh Ghaffari showed that the tiny, gel-like tectorial membrane, located in the inner ear, coordinates with the basilar membrane to fine-tune the ear’s ability to distinguish sounds. Last month, they reported in Nature Communications that a mutation in one of the proteins of the tectorial membrane interferes with that process.

Sound waves

It has been known for more than 50 years that sound waves entering the ear travel along the spiral-shaped, fluid-filled cochlea in the inner ear. Hair cells lining the ribbon-like basilar membrane in the cochlea translate those sound waves into electrical impulses that are sent to the brain. As sound waves travel along the basilar membrane, they “break” at different points, much as ocean waves break on the beach. The break location helps the ear to sort sounds of different frequencies.
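This place-based frequency sorting is often summarized by the Greenwood place–frequency function, which maps position along the basilar membrane to the frequency that "breaks" there. A minimal sketch (the human parameter values below come from the standard Greenwood fit, not from this article):

```python
import math

def greenwood_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at fractional distance x along the
    human basilar membrane (x = 0 at the apex, x = 1 at the base),
    using the standard Greenwood fit F = A * (10**(a*x) - k)."""
    A, a, k = 165.4, 2.1, 0.88  # commonly cited human parameters
    return A * (10 ** (a * x) - k)

# Low frequencies "break" near the apex, high frequencies near the base.
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.2f} -> {greenwood_frequency(x):8.0f} Hz")
```

Running the loop shows the map spanning roughly 20 Hz at the apex to about 20 kHz at the base, matching the range of human hearing.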

Until recently, the role of the tectorial membrane in this process was not well understood.

In their 2007 paper, Freeman and Ghaffari showed that the tectorial membrane carries waves that move from side to side, while up-and-down waves travel along the basilar membrane. Together, the two membranes can work to activate enough hair cells so that individual sounds are detected, but not so many that sounds can’t be distinguished from each other.

Made of a special gel-like material found nowhere else in the body, the entire tectorial membrane could fit inside a one-inch segment of a human hair. The membrane consists of three specialized proteins, making it an ideal target for genetic studies of hearing.

One of those proteins is beta-tectorin (encoded by the TectB gene), the focus of Ghaffari, Aranyosi and Freeman’s recent Nature Communications paper. Collaborating with biologist Guy Richardson of the University of Sussex, the researchers found that in mice lacking the TectB gene, sound waves did not travel as fast or as far along the tectorial membrane as they do in normal membranes. When the tectorial membrane is not functioning properly in these mice, sounds stimulate fewer hair cells, making the ear less sensitive and overly selective.

Until the recent MIT studies on the tectorial membrane, researchers trying to come up with a model to explain the membrane’s role didn’t have a good way to test their theories, says Karl Grosh, professor of mechanical and biomedical engineering at the University of Michigan. “This is a very nice piece of work that starts to bring together the modeling and experimental results in a way that is very satisfying,” he says.

Mammalian hearing systems are extremely similar across species, which leads the MIT researchers to believe that their findings in mice are applicable to human hearing as well.

New designs

Most hearing aids consist of a microphone that receives sound waves from the environment, and a loudspeaker that amplifies them and sends them into the middle and inner ear. Over the decades, refinements have been made to the basic design, but no one has been able to overcome a fundamental problem: instead of selectively amplifying one person’s voice, the devices amplify all sounds, including background noise.

Freeman believes that incorporating the interactions between traveling waves on the tectorial and basilar membranes into this new model could improve our understanding of hearing mechanisms and lead to hearing aids with enhanced signal processing. Such a device could tune in to a specific range of frequencies, for example, those of the voice you want to listen to, and amplify only those sounds.
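One way to read "tune in to a specific range of frequencies" in signal-processing terms is a band-pass filter centered on the talker's voice. A minimal sketch (the coefficients follow the standard RBJ audio-EQ-cookbook band-pass design; the sample rate, center frequency and Q below are illustrative choices, not values from the article):

```python
import math

def bandpass_biquad(samples, fs, f0, q):
    """Filter `samples` with a band-pass biquad centered at f0 Hz
    (sample rate fs, quality factor q), using the standard RBJ
    audio-EQ-cookbook band-pass (0 dB peak gain) coefficients."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b0, b1, b2 = alpha / a0, 0.0, -alpha / a0
    a1, a2 = -2 * math.cos(w0) / a0, (1 - alpha) / a0

    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:  # direct-form I difference equation
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Illustrative: a 1 kHz "voice" tone passes, a 4 kHz "noise" tone is attenuated.
fs, n = 16000, 16000
voice = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]
noise = [math.sin(2 * math.pi * 4000 * t / fs) for t in range(n)]
print(rms(bandpass_biquad(voice, fs, 1000, 5.0)))  # near the input RMS
print(rms(bandpass_biquad(noise, fs, 1000, 5.0)))  # much smaller
```

A real device would of course need to find the talker's frequency range adaptively; this sketch only shows the selective-amplification step itself.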

Freeman, who has hearing loss from working in a noisy factory as a teenager and from side effects of a medicine he was given for rheumatic fever, worked on hearing-aid designs 25 years ago. However, he was discouraged by the fact that most new ideas for hearing-aid design did not offer significant improvements. He decided to conduct basic research in this area, hoping that understanding the ear better would naturally lead to new approaches to hearing-aid design.

“We’re really trying to figure out the algorithm by which sounds are sorted, because if we could figure that out, we could put it into a machine,” says Freeman, who is a member of MIT’s Research Laboratory of Electronics and the Harvard-MIT Division of Health Sciences and Technology. His group’s recent tectorial membrane research was funded by the National Institutes of Health.

Next, the researchers are continuing their studies of tectorial membrane protein mutations to see if tectorial traveling waves play similar roles in other genetic disorders of hearing.


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.


More information: www.rle.mit.edu/rleonline/rese… omechanicsGroup.html


User comments: 1

ArtflDgr
not rated yet Nov 10, 2010
The method used by people with normal ears includes distance information. This creates a pair of spots, one real and one phantom, that a listener can attend to more than sound coming from other locations.

I have worked on this problem for years, decoding a map of a room and then mathematically generating a second, phantom ear to compute location information and isolate sources using only one microphone.