Sounding out heart problems automatically

July 11, 2008

Sounding the chest with a cold stethoscope is probably one of the most commonly used diagnostics in the examination room, after peering down the back of the throat while the patient says, "Aaaah". But research published in the inaugural issue of the International Journal of Medical Engineering and Informatics looks set to add an information-age approach to diagnosing heart problems. The technique could circumvent the problem of the failing stethoscope skills of medical graduates and reduce errors of judgment.

Listening closely to the sound of the beating heart can reveal a lot about its health. Healthcare workers can quickly identify murmurs, palpitations, and other anomalies, and then carry out in-depth tests as appropriate. Now, Samit Ari and Goutam Saha of the Indian Institute of Technology in Kharagpur have developed an analytical method that can automatically classify a much wider range of heart sounds than even the most skilled stethoscope-wielding physician can.

Their approach is based on a mathematical analysis of the sound waves produced by the beating heart known as Empirical Mode Decomposition (EMD). EMD breaks down the sound of each heart cycle into its component parts, allowing the researchers to isolate the sound of interest from background noise such as patient movements, internal body gurgles, and ambient sounds.
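The article does not spell out the authors' implementation, but a minimal sketch of the standard EMD "sifting" procedure, written here in Python with placeholder parameters such as the iteration limit and stopping tolerance, gives a sense of how a heart-sound recording could be split into component oscillations:

```python
# Minimal EMD sketch (not the authors' implementation): repeatedly extract
# intrinsic mode functions (IMFs) by sifting. Parameters below are illustrative.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_imf(x, max_iter=50, tol=0.05):
    """Extract one intrinsic mode function from x by repeated sifting."""
    h = x.astype(float).copy()
    for _ in range(max_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break                                   # too few extrema to fit envelopes
        t = np.arange(len(h))
        upper = CubicSpline(maxima, h[maxima])(t)   # upper envelope through maxima
        lower = CubicSpline(minima, h[minima])(t)   # lower envelope through minima
        mean_env = (upper + lower) / 2.0
        h_new = h - mean_env                        # subtract the local mean
        if np.sum((h - h_new) ** 2) / np.sum(h ** 2) < tol:
            return h_new                            # change is small enough: IMF found
        h = h_new
    return h

def emd(x, n_imfs=5):
    """Decompose signal x into a list of IMFs plus a residue."""
    residue = x.astype(float).copy()
    imfs = []
    for _ in range(n_imfs):
        imf = sift_imf(residue)
        imfs.append(imf)
        residue = residue - imf
        if np.all(np.abs(residue) < 1e-10):
            break
    return imfs, residue
```

In practice each heart-cycle segment of the recording would be passed through a decomposition of this kind, and the components dominated by noise discarded before features are computed.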

The analysis thus produces a signal characterized by twenty-five different sound qualities and variables, which can then be fed into a computer-based classification system. The classification uses an Artificial Neural Network (ANN) and a Grow and Learn (GAL) network, which are trained with standardized sounds associated with specific diagnoses.
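The exact ANN and GAL architectures are not described in this article, so the sketch below substitutes a generic scikit-learn multilayer perceptron, with made-up 25-element feature vectors and purely illustrative diagnosis labels, to show how such a trained classifier would be wired up:

```python
# Hedged sketch of the classification stage: a generic MLP stands in for the
# paper's ANN and GAL networks. Feature values and labels are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_FEATURES = 25                                   # one value per extracted sound quality/variable
CLASSES = ["normal", "aortic_stenosis", "mitral_regurgitation"]  # illustrative labels only

rng = np.random.default_rng(0)
X_train = rng.normal(size=(90, N_FEATURES))       # placeholder EMD-derived feature vectors
y_train = rng.choice(CLASSES, size=90)            # placeholder known diagnoses

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)                         # "training with standardized sounds"

# Classify the feature vector of a new recording
x_new = rng.normal(size=(1, N_FEATURES))
print(clf.predict(x_new))
```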

The team then tested the trained networks using more than 100 recordings of normal heart sounds, sounds from hearts with a variety of valve problems, and different background noises. They found that in all cases the EMD-based system performed more effectively than conventional electronic, wavelet-based approaches to heart sound classification.

A disturbing percentage of medical graduates cannot properly diagnose heart conditions using a stethoscope, the researchers explain, and the poor sensitivity of the human ear to low-frequency heart sounds makes the task even more difficult. Automatic classification of heart sounds based on Ari and Saha's technique could remedy these failings.

Source: Inderscience Publishers
