New technology enhances speech perception

July 30, 2018, Aalborg University

Future hearing aid users will be able to target their listening more accurately thanks to new Danish technology. A researcher from Aalborg University uses machine learning to teach a computer programme how to remove unwanted noise and enhance speech.

One of the main challenges for people with hearing loss is understanding speech in noisy surroundings. The problem is referred to as the cocktail party effect, because situations where many people are talking at the same time often make it very hard to distinguish what is being said by the individual you are talking to.

Even though most modern hearing aids incorporate various forms of speech enhancement technology, engineers are still struggling to develop a system that makes a significant improvement.

Ph.D. student Mathew Kavalekalam from the Audio Analysis Lab at Aalborg University is using machine learning to develop an algorithm that enables a computer programme to distinguish between spoken words and background noise. The project is carried out together with hearing aid researchers from GN Advanced Science and is supported by Innovation Fund Denmark.

Computer listens and learns

"The hearing centre inside our brains usually performs a string of wildly complicated calculations that enables us to focus on a single voice – even if there are many other people talking in the background," explains Mathew Kavalekalam, Aalborg University. "But that ability is very difficult to recreate in a machine."

Mathew Kavalekalam started out with a digital model that describes how speech is produced in a human body, from the lungs via throat and larynx, mouth and nasal cavities, teeth, lips, etc.

He used the model to describe the type of signal that a computer should 'listen' for when trying to identify a talking voice. He then told the computer to start listening and learning.
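As a rough illustration of the idea (not Kavalekalam's actual model), speech production is often described with a source-filter picture: an excitation produced by the lungs and vocal folds is shaped by the resonances of the throat, mouth and nasal cavities. A minimal Python sketch of that picture, using made-up pitch and formant values, might look like this:

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000                        # sample rate in Hz (illustrative)
f0 = 120                         # pitch of the voiced excitation in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal

# "Source": a glottal pulse train standing in for airflow from the lungs
excitation = np.zeros_like(t)
excitation[:: int(fs / f0)] = 1.0

# "Filter": the vocal tract modelled as an all-pole filter whose poles sit at
# formant-like resonance frequencies (the values below are purely illustrative)
def vocal_tract(formants_hz, bandwidths_hz, fs):
    a = np.array([1.0])
    for f, bw in zip(formants_hz, bandwidths_hz):
        r = np.exp(-np.pi * bw / fs)           # pole radius from bandwidth
        theta = 2 * np.pi * f / fs             # pole angle from frequency
        a = np.convolve(a, [1.0, -2 * r * np.cos(theta), r ** 2])
    return a

a = vocal_tract([500, 1500, 2500], [80, 120, 160], fs)
speech_like = lfilter([1.0], a, excitation)    # excitation shaped by the "tract"
```

Enhancement methods built on this kind of production model look for signal segments whose spectral envelope is consistent with such a vocal-tract filter, which is one way a program can 'listen' for speech rather than noise.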

Noise isn't just noise

"Background noise differs depending on the environment, from street or traffic noise if you are outside to the noise of people talking in a pub or a cafeteria," Mathew Kavalekalam says. "That is one of the many reasons why it is so tricky to build a model for speech enhancement that separates the speech you want to hear from the babbling you are not interested in."

At Aalborg University, Mathew Kavalekalam played back various recordings of talking voices to the computer, gradually adding different types of background noise at increasing levels.

By applying machine learning in this way, the software developed a way of recognising the sound patterns of talking voices and calculating how to enhance them rather than the background noise.
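The training recipe described above can be sketched in a few lines: mix clean sentences with noise at progressively lower signal-to-noise ratios and hand the pairs to whatever learning stage is used. The snippet below uses placeholder arrays and only illustrates the data preparation, not the actual system:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that the mixture has the requested SNR in dB."""
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / noise_power)

# Placeholders: in practice these would be recorded sentences and, say, babble noise
clean_sentence = np.random.randn(16000)
babble_noise = np.random.randn(16000)

# Gradually increase the noise level (i.e. lower the SNR) during training
for snr_db in [20, 15, 10, 5, 0]:
    noisy = mix_at_snr(clean_sentence, babble_noise, snr_db)
    # (noisy, clean_sentence) pairs would then be fed to the learning stage
```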

Fifteen percent improvement

The result of Kavalekalam's work is a piece of software that can effectively help people with hearing loss better understand speech. It is able to identify and enhance spoken words even in very noisy surroundings.

So far the model has been tested on ten people, who listened to speech in background noise with and without the use of Kavalekalam's algorithm.

The test subjects were asked to perform simple tasks involving colours, numbers and letters that were described to them in noisy environments.

The results indicate that Kavalekalam may well have developed a promising solution. Test subjects' speech perception improved by fifteen percent in very noisy surroundings.

Snappy signal processing

However, there is still some work to be done before Mathew Kavalekalam's software finds its way into new hearing aids. The technology needs to be tweaked and tuned before it is practically applicable.

The algorithm needs to be optimized to take up less processing power. Even though technology keeps getting faster and more powerful, there are hardware limitations in small, modern hearing aids.

"When it comes to speech enhancement, signal processing needs to be really snappy. If the sound is delayed in the hearing aid, it gets out of sync with the mouth movements and that will end up making you even more confused," explains Mathew Kavalekalam.

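The latency concern is easy to quantify: block-based enhancement cannot output a sample before it has collected a full analysis frame (plus any look-ahead), so the frame length sets a floor on the delay. A back-of-the-envelope check, with assumed frame sizes rather than the project's actual parameters, looks like this:

```python
fs = 16000                 # sample rate in Hz (assumed)
frame_len = 160            # 10 ms analysis frame (assumed)
lookahead = 0              # samples of future context the algorithm waits for

# Minimum algorithmic delay of frame-based processing, before any hardware latency
delay_ms = (frame_len + lookahead) / fs * 1000
print(f"algorithmic delay: {delay_ms:.1f} ms")   # -> 10.0 ms
```

Every extra millisecond of processing or look-ahead adds directly to the mismatch between the enhanced sound and the speaker's lip movements, which is why the algorithm has to stay lean.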