Coming next in domotics—houses that decipher voice commands

January 26, 2017, CORDIS

In the race for a smart everything, houses seem to gather more attention every year. Voice control is one of the features high-tech companies are willing to invest in and - while technological solutions are still in their early stages - an EU-funded project is looking to go a step further: hands-free voice control powered by automatic speech recognition.

'This was an impressive demonstration,' says Alina Suhetzki. Only a year and a half after the LISTEN project's kick-off, the EC project officer had the chance to experience the technology and see its potential at the mid-term project review meeting held in Heraklion, Greece, in mid-January. Using voice commands in English, she successfully switched the lights on and off with no delay in execution whatsoever.

This demonstration was a big step in the right direction for LISTEN researchers, whose core ambition is to design and implement a hardware and software environment that enables robust, hands-free, voice-based access to web applications in smart homes. This environment combines a speech capture system operating as a wireless acoustic sensor network (WASN) with an automatic speech recognition (ASR) system.

Currently recognising as many as four languages (English, French, Italian and Greek), the novel system allows users not only to switch various smart appliances on and off, but also to perform everyday tasks such as web search, email dictation and access to social networks. All this with no headset and no need to speak close to a microphone.
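To make the idea concrete, the sketch below shows how recognised speech might be routed to smart-home actions. It is a minimal, hypothetical illustration and not the LISTEN project's actual software: the names (SmartHome, handle_command) are invented, and the exact string matching stands in for the far richer language understanding a real system would use on top of its ASR output.

```python
# Hypothetical sketch: routing recognised voice commands to smart-home actions.
# The class and function names here are illustrative, not part of LISTEN.

from dataclasses import dataclass


@dataclass
class SmartHome:
    lights_on: bool = False

    def switch_lights(self, on: bool) -> str:
        # Toggle the (simulated) lights and report what happened.
        self.lights_on = on
        return f"Lights turned {'on' if on else 'off'}"


def handle_command(home: SmartHome, transcript: str) -> str:
    # 'transcript' stands in for the text produced by a distant-speech
    # ASR front end; a real system would use proper language understanding
    # rather than substring matching.
    text = transcript.lower().strip()
    if "lights on" in text:
        return home.switch_lights(True)
    if "lights off" in text:
        return home.switch_lights(False)
    return "Command not recognised"


if __name__ == "__main__":
    home = SmartHome()
    for utterance in ["Turn the lights on", "Switch the lights off"]:
        print(handle_command(home, utterance))
```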

LISTEN brings together the research foundation FORTH, RWTH Aachen, the European Media Laboratory (EML) and the Italian voice-to-text and print company Cedat85. The four partners aim to bridge the gap between the acoustic sensor and automatic speech recognition research communities, but also to push the boundaries of the current state of the art.

Recently, researchers from project partners FORTH and RWTH Aachen seized an opportunity to shine at the 4th CHiME Speech Separation and Recognition Challenge. LISTEN technology came second in all three tracks of the challenge, which used 1, 2 or 6 microphones respectively to capture speech signals. The challenge welcomed a total of 15 participants.

'We are very proud of these results, which clearly indicate the quality of our team and the potential of LISTEN to develop innovative research and to attract the attention of the research community,' Prof. Athanasios Mouchtaris, LISTEN project coordinator and affiliated researcher at FORTH, said after the challenge.
