Noises off: The machine that rubs out noise (w/ Video)

Oct 03, 2013

Future hearing aids could be adjusted by the wearer to remove background noise using new technology that could also be used to clean up and search YouTube videos.

A noisy restaurant, a busy road, a windy day – all situations that can be intensely frustrating for the hearing impaired when trying to pick out speech in a noisy environment. Some 10 million people in the UK suffer from hearing difficulties and, as helpful as hearing aids are, those who wear them often complain that background noise continues to be a problem.

What if hearing device wearers could choose to filter out all the troublesome sounds and focus on the voices they want to hear? Engineer Dr Richard Turner believes that this is fast becoming a possibility. He is developing a system that identifies the corrupting noise and "rubs it out".

"The poor performance of current in noise is a major reason why six million people in the UK who would benefit from a hearing aid do not use them," he said. Moreover, as the population ages, a greater number of people will be hindered by the inability to hear clearly. In addition, patients fitted with cochlear implants – devices implanted into the brain to help those whose auditory hair cells have died – suffer from similar limitations.

The solution lies in the statistics of sound, as Turner explained: "Many interfering noises are immediately recognisable. Raindrops patter on a surface, a fire crackles, talkers babble at a party and the wind howls. But what makes these so-called auditory textures sound the way they do? No two rain sounds are identical because the precise arrangement of falling water droplets is never repeated. Nonetheless, there must be a statistical similarity in the sounds compared with, say, the crackle of a fire.


"For this reason, we think the brain groups together different aspects of sounds using prior experience of their characteristic statistical structure. We can model this mathematically using a form of statistical reasoning called Bayesian inference and then develop computer algorithms that mimic what the brain is doing."

The mathematical system that he and colleagues have developed is capable of being "trained" – a process that uses new methods from the field of machine learning – so that it can recognise sounds. "Rather surprisingly, it seems that a relatively small set of statistics is sufficient to describe a large number of sounds."
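To give a flavour of what such a "small set of statistics" might look like, the hedged sketch below summarises a sound by the mean, spread and skewness of its sub-band amplitude envelopes, in the spirit of auditory texture statistics. The band edges and choice of moments are illustrative assumptions, not the team's published feature set:

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def texture_statistics(x, fs, bands=((100, 400), (400, 1600), (1600, 6400))):
        """Compact texture description: per-band envelope mean, std and
        skewness. Bands and moments are illustrative choices."""
        stats = []
        for lo, hi in bands:
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            env = np.abs(hilbert(sosfilt(sos, x)))   # amplitude envelope
            m, s = env.mean(), env.std()
            skew = np.mean(((env - m) / (s + 1e-12)) ** 3)
            stats.extend([m, s, skew])
        return np.array(stats)

    fs = 16000
    second_of_noise = np.random.randn(fs)
    print(texture_statistics(second_of_noise, fs))   # nine numbers per sound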

Crucially, the system is capable of telling the difference between speech and audio textures. "What we can now do in an adaptive way is to remove background noise and pass these cleaned up sounds to a listener to improve their perception in a difficult environment," said Turner, who is working with hearing experts Professor Brian Moore at the Department of Experimental Psychology and Dr Robert Carlyon at the Medical Research Council Cognition and Brain Sciences Unit, with funding from the Engineering and Physical Sciences Research Council.
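The article does not spell out the separation algorithm itself, so the following is a generic textbook stand-in rather than the authors' method: a Wiener-style spectral gain that estimates the noise spectrum from an assumed noise-only lead-in and attenuates the time-frequency cells that the noise dominates:

    import numpy as np
    from scipy.signal import stft, istft

    def denoise(x, fs, noise_seconds=0.5):
        """Wiener-style spectral gain; assumes the first noise_seconds of
        the signal contain noise only. Not the authors' algorithm."""
        f, t, X = stft(x, fs=fs, nperseg=512)        # hop = 256 samples
        n_frames = max(1, int(noise_seconds * fs / 256))
        noise_pow = np.mean(np.abs(X[:, :n_frames]) ** 2, axis=1, keepdims=True)
        speech_pow = np.maximum(np.abs(X) ** 2 - noise_pow, 0.0)
        gain = speech_pow / (speech_pow + noise_pow + 1e-12)
        _, y = istft(gain * X, fs=fs, nperseg=512)
        return y

    fs = 16000
    t = np.arange(2 * fs) / fs
    tone = np.sin(2 * np.pi * 440 * t) * (t > 0.5)   # "speech" after 0.5 s
    cleaned = denoise(tone + 0.3 * np.random.randn(len(t)), fs)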

The idea is that future devices will have several different modes in which they can operate. These might include a mode for travelling in a car or on a train, a mode for environments like a party or a noisy restaurant, a mode for outdoor environments that are windy, and so on. The device might intelligently select an appropriate mode based on the characteristics of the incoming sound. Alternatively, the user could override this and select a processing mode based upon what sorts of noise they wish to erase.
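A minimal sketch of what the automatic selection step could look like, assuming each mode stores a centroid of the statistics vectors it was trained on (the centroids below are invented placeholders):

    import numpy as np

    MODE_CENTROIDS = {
        "car/train": np.array([0.9, 0.2, 0.1]),   # invented placeholder values
        "party":     np.array([0.3, 0.8, 0.2]),
        "windy":     np.array([0.2, 0.1, 0.9]),
    }

    def select_mode(stats, user_override=None):
        """Nearest-centroid mode choice; the wearer can always override."""
        if user_override is not None:
            return user_override
        return min(MODE_CENTROIDS,
                   key=lambda m: np.linalg.norm(stats - MODE_CENTROIDS[m]))

    print(select_mode(np.array([0.8, 0.25, 0.15])))                 # car/train
    print(select_mode(np.array([0.8, 0.25, 0.15]), user_override="party"))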

"In a sense we are developing the technology to underpin intelligent hearing devices," he added. "One possibility would be for users to control their device using an interface on a mobile phone through wireless communication. This would allow users to guide the processing as they wish."

Turner anticipates a further two years of simulating the effect of modifications that clean up sound before they start to work with device specialists. "If these preliminary tests go well, then we'll be looking to work with hearing device companies to try to adapt their processing to incorporate these machine learning techniques. If all goes well, we would hope that this technology will be available in consumer devices within 10 years."

Tinnitus sufferers could also benefit from the technology. Plagued by a constant ringing in the ears, people with tinnitus sometimes use environmental sound generators as a distraction. Such generators offer a limited selection of sounds – a babbling brook, waves lapping, leaves rustling – but, with the new technology, "patients could traverse the entire space of audio textures and figure out where in this enormous spectrum is the best sound for relieving their tinnitus," added Turner.
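One way to picture "traversing the space of textures" is to interpolate between the statistics of two presets and synthesise noise to match. The crude sketch below slides per-band gains between two invented presets; real texture synthesis would match far richer statistics than this:

    import numpy as np
    from scipy.signal import butter, sosfilt

    BANDS = ((100, 400), (400, 1600), (1600, 6400))
    BROOK = np.array([1.0, 0.5, 0.2])   # invented per-band gains
    WAVES = np.array([0.3, 0.4, 1.0])

    def synthesise(alpha, fs=16000, seconds=1.0):
        """alpha in [0, 1] slides the texture from 'brook' to 'waves'."""
        gains = (1 - alpha) * BROOK + alpha * WAVES
        x = np.random.randn(int(fs * seconds))
        out = np.zeros_like(x)
        for (lo, hi), g in zip(BANDS, gains):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            out += g * sosfilt(sos, x)
        return out

    halfway = synthesise(0.5)   # a texture midway between the two presets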

The technology not only holds promise for helping the hearing impaired, but it also has the potential to improve mobile phone communication – anyone who has ever tried to hold a conversation with someone phoning from a crowded room will recognise the possible benefits of such a facility.

Moreover, with 100 hours of video now being uploaded to YouTube every minute, Google has recognised the potential for systems that can recognise audio content and is funding part of Turner's research. "As an example, a YouTube video containing a conversation that takes place by a busy roadside on a windy day could be automatically categorised based on the speech, traffic and wind noises present in the soundtrack, allowing users to search videos for these categories. In addition, the soundtrack could also be made more intelligible by isolating the speech from the noises – one can imagine users being offered the chance to de-noise their video during the upload process.

"We think this new framework will form a foundation of the emerging field of 'machine hearing'. In the future, machine hearing will be standard in a vast range of applications from hearing devices, which is a market worth £18 billion per annum, to audio searching, and from music processing tasks to augmented reality systems. We believe this research project will kick-start this proliferation."


User comments


Jimbaloid
Oct 03, 2013
Please help me to understand: if the brain of a listener with normal hearing can filter out this kind of noise, why is this not also true of the brain of a hearing aid user? A theoretically perfect hearing aid would amplify audio to a level that, to the user, is the same as when their hearing was healthy, perhaps applying more gain to some frequencies than others to achieve this. Is it imperfections of the audio amplification or a characteristic of the damage to the ear that prevents the brain from repeating this trick with a hearing aid fitted?
Jimbaloid
Oct 03, 2013
You hint at direction. I was already wondering if the problem with hearing aids might indeed be direction information being lost - that the healthy and unobstructed ear is able to let our brain focus our attention on sound arriving from a specific direction, even a position in space. Yet the hearing aid, sitting in the ear and amplifying the sound arriving, is re-emitting it all from a single point to the inner ear, so every sound is travelling in the same direction, removing much of the brain's ability to filter out the unwanted noise? In my mind, this fits with the difficulty of someone with healthy hearing understanding speech on a tape recording or a phone conversation against a noisy background, where once again such directional information has been lost.
beleg
Oct 03, 2013
This is noise cancellation – this time around, adaptive noise cancellation.

Ears are phase sensitive. Eyes are not (polarization is not possible).
Phase sensitivity to speech is critical, making the speech heard either incomprehensible or understood.
Phase sensitivity is critical to the sense of sound source location as well.
beleg
Oct 03, 2013
Richard Turner is on to something.
Hearing perception is a phase space brain process – a process independent of mass and energy.
The statistical treatment of physical events will appear the same in phase space.
And the best goes to Richard Turner.
Jimbaloid
Oct 04, 2013
I just would like to add that in asking questions, I've not been trying to suggest that Mr Turner's work isn't significant, or that it isn't a valid solution. Reading the article just got me thinking as to why this problem exists to begin with. Many people on this site are knowledgeable and can provide useful insight. Thanks.
beleg
Oct 04, 2013
"Reading the article just got me thinking as to why this problem exists to begin with. " - J
The health problems of hearing motivate the search for solutions from the science of sound.
