Study reveals potential breakthrough in hearing technology

Nov 18, 2013 by Pam Frost Gorder

Computer engineers and hearing scientists at The Ohio State University have made a potential breakthrough in solving a 50-year-old problem in hearing technology: how to help the hearing-impaired understand speech in the midst of background noise.

In the Journal of the Acoustical Society of America, they describe how they used the latest developments in neural networks to boost test subjects' recognition of spoken words from as low as 10 percent to as high as 90 percent.

The researchers hope the technology will pave the way for next-generation digital aids. Such hearing aids could even reside inside smartphones; the phones would do the computer processing, and broadcast the enhanced signal to ultra-small earpieces wirelessly.

Several patents are pending on the technology, and the researchers are working with Starkey, a leading hearing aid manufacturer, as well as others around the world, to develop it further.

Conquering background noise has been a "holy grail" in hearing technology for half a century, explained Eric Healy, professor of speech and hearing science and director of Ohio State's Speech Psychoacoustics Laboratory.

The desire to understand one voice in a roomful of chatter has been dubbed the "cocktail party problem."

"Focusing on what one person is saying and ignoring the rest is something that normal-hearing listeners are very good at, and hearing-impaired listeners are very bad at," Healy said. "We've come up with a way to do the job for them, and make their limitations moot."

Key to the technology is a computer algorithm developed by DeLiang "Leon" Wang, professor of computer science and engineering, and his team. It quickly analyzes speech and removes most of the background noise.

[Video: a sound clip played for study participants to test whether they could hear a single, clear sentence amid a background of babble. Credit: The Ohio State University Speech Psychoacoustics Laboratory]

"For 50 years, researchers have tried to pull out the speech from the background noise. That hasn't worked, so we decided to try a very different approach: classify the noisy speech and retain only the parts where speech dominates the noise," Wang said.
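The classification idea Wang describes is often explained in the hearing literature as estimating a binary mask over the spectrogram: label each time-frequency unit as speech-dominant or noise-dominant, and keep only the former. As a rough illustration only (not the team's actual code), the sketch below computes an "ideal" binary mask from separately known speech and noise signals, the kind of label such a classifier is trained to reproduce; the STFT parameters and toy signals are assumptions for demonstration:

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Magnitude short-time Fourier transform with a Hann window (illustrative)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

def ideal_binary_mask(speech, noise, snr_db=0.0):
    """True where speech energy exceeds noise energy by snr_db in a T-F unit."""
    s_db = 20 * np.log10(stft(speech) + 1e-12)
    n_db = 20 * np.log10(stft(noise) + 1e-12)
    return (s_db - n_db) > snr_db

# Toy example: a 440 Hz tone stands in for speech, white noise for babble.
fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)
noise = 0.1 * np.random.default_rng(0).standard_normal(fs)

mask = ideal_binary_mask(speech, noise)
masked_spectrogram = stft(speech + noise) * mask  # zero out noise-dominant units
```

Multiplying the noisy spectrogram by the mask discards the units where noise dominates, which is the effect listeners heard in the processed clips; in a real system the mask must be predicted from the noisy mixture alone, since clean speech and noise are not separately available.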

In initial tests, Healy and doctoral student Sarah Yoho removed twelve hearing-impaired volunteers' hearing aids, then played recordings of speech obscured by background noise over headphones. They asked the participants to repeat the words they heard. Then they re-performed the same test, after processing the recordings with the algorithm to remove background noise.

[Video: the same clip after the algorithm has removed the background babble, so that a single, clear sentence can be heard: "They ate the lemon pie." Credit: The Ohio State University Speech Psychoacoustics Laboratory]

They tested the algorithm's effectiveness against "stationary noise"—a constant noise like the hum of an air conditioner—and then with the babble of other voices in the background.

The algorithm was particularly effective against background babble, improving hearing-impaired people's comprehension from 25 percent to close to 85 percent on average. Against stationary noise, the algorithm improved comprehension from an average of 35 percent to 85 percent.

For comparison, the researchers repeated the test with twelve undergraduate Ohio State students who were not hearing-impaired. They found that scores for the normal-hearing listeners without the aid of the algorithm's processing were lower than those for the hearing-impaired listeners with processing.

"That means that hearing-impaired people who had the benefit of this algorithm could hear better than students with no hearing loss," Healy said.

A new $1.8 million grant from the National Institutes of Health will support the research team's refinement of the algorithm and testing on human volunteers.

The algorithm is unique, Wang said, because it utilizes a technique called machine learning. He and doctoral student Yuxuan Wang are training the algorithm to separate speech by exposing it to different words in the midst of background noise. They use a special type of neural network called a "deep neural network" to do the processing—so named because its learning is performed through a deep layered structure inspired by the human brain.
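The article gives no network details, but the general recipe in this line of work is to extract acoustic features for each time-frequency unit and train a network to predict whether speech dominates that unit. The following is a deliberately tiny, hypothetical sketch with one hidden layer and synthetic features; real deep neural networks stack several layers, use specialized auditory features, and train on far more data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: each row is a feature vector for one time-frequency
# unit; the label says whether speech dominates noise in that unit.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # made-up labeling rule

# One hidden layer trained with full-batch gradient descent on cross-entropy.
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # P(speech dominates) per unit
    g = (p - y)[:, None] / len(X)       # output-layer cross-entropy gradient
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    gh = (g @ W2.T) * (1 - h ** 2)      # backpropagate through tanh
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

train_acc = ((p > 0.5) == y.astype(bool)).mean()
```

Thresholding the network's per-unit probabilities at 0.5 yields the estimated binary mask; "deep" versions of this classifier simply repeat the hidden-layer step several times before the output.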

These initial tests focused on pre-recorded sounds. In the future, the researchers will refine the algorithm to make it better able to process speech in real time. They also believe that, as hearing aid electronics continue to shrink and smartphones become even more common, phones will have more than enough processing power to run the algorithm and transmit sounds instantly—and wirelessly—to the listener's ears.

Some 10 percent of the population—700 million people worldwide—suffer from hearing loss. The problem increases with age. In a 2006 study, Healy determined that around 40 percent of people in their 80s experience hearing loss severe enough to make others' speech at least partially unintelligible.

One of them is Wang's mother, who, like most people with her condition, has difficulty filtering out background noise.

"She's been one of my primary motivations," Wang said. "When I go visit her, she insists that only one person at a time talk at the dinner table. If more than one person talks at the same time, she goes absolutely bananas because she just can't understand. She's tried all sorts of hearing aids, and none of them works for this problem."

"This is the first time anyone in the entire field has demonstrated a solution," he continued. "We believe that this is a breakthrough in the true sense of the word."

The technology is currently being commercialized and is available for license from Ohio State's Technology Commercialization and Knowledge Transfer Office.


User comments: 8


Eikka
2.6 / 5 (5) Nov 18, 2013
how would the device decide whose voice it tunes into?
Sinister1811
1 / 5 (2) Nov 18, 2013
I tried to play the sound file, but I couldn't hear it.

Irony much?
beleg
1 / 5 (3) Nov 18, 2013
This might help:
http://scitation.....4820893

Labeled binary masking.
Looking for free full text access...
One look at the algorithm can answer your question.

Without looking at the algorithm only a guess is possible.
Using the analogy of fingerprints voices are identifiable too.
Masking for a specific voice is acquired through having the algorithm process that voice and once masking is built for that voice from input, storing that unique masking and voice.

Still, we all need a glance at the algorithm without fee or charge.
The secret remains a secret - circuit and software implementation.
beleg
1 / 5 (3) Nov 18, 2013
Here's a not-quite-as-effective algorithm, on the heels of the more successful algorithm reported above.
http://ecs.utdall...pt09.pdf
beleg
1.7 / 5 (6) Nov 18, 2013
.@Sinister
Irony or acquire their aid to hear the file.
j/k
beleg
1 / 5 (2) Nov 18, 2013
@NOM
Reporting your abuse is futile. I did anyway.
maco
1 / 5 (2) Nov 18, 2013
As I understand it, with government help a company got patents on algorithms, algorithms that in the public domain could help thousands of times more people than being tied to one company. So to me, this is wrong on so many levels.
geokstr
1 / 5 (3) Nov 19, 2013
What we really need is not only a device to make us hear better, but one that can make us "listen" better too.
