'Listening' drone helps find victims needing rescue in disasters

December 21, 2017, Tokyo Institute of Technology
Blue circles on the map (top right) indicate the detected sound source locations. https://youtu.be/xsD4saM6vFo Credit: Kumamoto University, Tokyo Institute of Technology and Waseda University

"Robot audition" is a research area that was proposed by Adjunct Professor Kazuhiro Nakadai of Tokyo Institute of Technology (Tokyo Tech) and Professor Hiroshi G. Okuno of Waseda University in 2000. Until then, robots had not been able to recognize voices unless a microphone was near a person's mouth. Development of "robot ears" began advancing with the idea that robots, like humans, should hear sound with their own ears. The entry barrier for this research area was high, since it involves a combination of signal processing, robotics, and artificial intelligence. However, vigorous activities since its proposal, including the publication of open source software, culminated in its official registration as a research area in 2014 by the IEEE Robotics and Automation Society (RAS), the largest community for robot research.

The three keys for making "robot ears" a reality are:

  1. sound source localization technology to estimate where a sound is coming from (a rough illustrative sketch of this step appears after the list),
  2. sound source separation technology to extract each individual sound from the mixture arriving from those directions, and
  3. automatic speech recognition technology to recognize the separated sounds despite background noise, much as humans can pick out speech across a noisy room.
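The sketch below is not the team's HARK-based implementation; it is a minimal, self-contained illustration of the first key, sound source localization, using the widely known GCC-PHAT method on a single pair of microphones. The sampling rate, microphone spacing, and test signal are all assumed values.

```python
# Minimal sketch of sound source localization for a two-microphone pair
# using GCC-PHAT (generalized cross-correlation with phase transform).
# All parameters below are illustrative assumptions, not the team's setup.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, approximate speed of sound in air
MIC_SPACING = 0.05       # m, assumed distance between the two microphones
SAMPLE_RATE = 16000      # Hz, assumed sampling rate

def gcc_phat_delay(sig, ref, fs=SAMPLE_RATE):
    """Estimate how much `sig` lags `ref`, in seconds, via GCC-PHAT."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    corr = np.fft.irfft(cross, n=n)
    max_shift = int(fs * MIC_SPACING / SPEED_OF_SOUND)  # largest physical lag
    corr = np.concatenate((corr[-max_shift:], corr[:max_shift + 1]))
    shift = np.argmax(np.abs(corr)) - max_shift
    return shift / fs

def direction_of_arrival(mic_a, mic_b):
    """Convert the inter-microphone lag into an angle (degrees) from the mic axis."""
    delay = gcc_phat_delay(mic_b, ref=mic_a)
    # Clamp to the physically possible range before taking arccos.
    cos_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Synthetic test: the same short tone reaches microphone B two samples later
# than microphone A, i.e. the source lies on A's side of the array.
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * 800 * t) * np.hanning(len(t))
mic_a = np.concatenate((tone, np.zeros(2)))
mic_b = np.concatenate((np.zeros(2), tone))
print(f"Estimated angle from mic axis: {direction_of_arrival(mic_a, mic_b):.1f} deg")
```

A real multi-microphone array combines information from many such pairs, or uses beamforming and subspace methods such as MUSIC, to localize sources in two or three dimensions.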

The research team pursued techniques to implement these keys in real environments and in real time. They developed technology that, like the legendary Japanese Prince Shotoku, can distinguish the simultaneous speech of multiple people. Among other projects, they have demonstrated simultaneous meal ordering by 11 people and created a robot game-show host that can handle multiple contestants answering at once.

This technology is the result of extreme audition research within a program led by program manager Satoshi Tadokoro of Tohoku University. A system has been developed that can detect voices, mobile device sounds, and other sounds through the background noise of a drone, to assist in faster victim recovery.

The microphone array has 16 microphones and connects with a single cable (left). A drone equipped with the microphone array (right). Credit: Kumamoto Univ., Tokyo Institute of Technology and Waseda Univ.

Assistant Professor Taro Suzuki of Waseda University provided the high-accuracy point cloud map data, an outcome of his research on high-performance GPS. The group performing the extreme audition research, Nakadai, Okuno, and Associate Professor Makoto Kumon of Kumamoto University, was central in developing this system, the first of its kind worldwide.

This system is made up of three main technical elements. The first is the open-source robot audition software HARK (HRI-JP Audition for Robots with Kyoto University). HARK has been updated every year since its 2008 release and had exceeded 120,000 total downloads as of December 2017. The software was extended to support embedded use while maintaining its robustness to noise. The researchers then embedded this version of HARK in a device mounted on the drone to reduce weight and take advantage of high-speed onboard processing. They realized that microphone array processing could be performed inside the microphone array device attached to the drone, so it is not necessary to send all of the captured signals to a base station wirelessly. The total data transmission volume was dramatically reduced, to less than 1/100. This made it possible to detect sound even through the noise generated by the drone itself.
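The reported reduction to less than 1/100 is easy to sanity-check with a rough calculation. In the sketch below, only the 16-channel figure comes from the article; the sampling rate, sample size, and the rate and size of the transmitted localization results are illustrative assumptions.

```python
# Back-of-envelope comparison (illustrative numbers, not the project's actual
# specifications): streaming raw 16-channel audio versus sending only the
# onboard localization results to the base station.

CHANNELS = 16          # microphones in the array (from the article)
SAMPLE_RATE = 16000    # Hz, assumed audio sampling rate
BYTES_PER_SAMPLE = 2   # 16-bit samples, assumed

raw_bytes_per_sec = CHANNELS * SAMPLE_RATE * BYTES_PER_SAMPLE   # 512,000 B/s

RESULTS_PER_SEC = 10   # assumed localization update rate
BYTES_PER_RESULT = 64  # assumed size of one timestamped direction estimate

result_bytes_per_sec = RESULTS_PER_SEC * BYTES_PER_RESULT       # 640 B/s

print(f"Raw audio:    {raw_bytes_per_sec:,} bytes/s")
print(f"Results only: {result_bytes_per_sec:,} bytes/s")
print(f"Reduction:    1/{raw_bytes_per_sec // result_bytes_per_sec}")
```

Under these assumptions the wireless link carries on the order of 1/800 of the raw data, comfortably below the 1/100 figure cited, which is what makes fully onboard array processing attractive on a bandwidth-limited drone link.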

The second element is a three-dimensional sound source location estimation technology with map display, which presents otherwise invisible sound sources in an easily understood visual user interface.
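As a rough illustration of how a detected sound direction could be turned into a point on a map (like the blue circles in the video), the sketch below intersects the estimated direction ray from the drone with flat ground. The actual system estimates locations in three dimensions against a high-accuracy point cloud map; the flat-ground assumption, coordinate frame, and all numbers here are illustrative only.

```python
# Simplified sketch: project an estimated sound direction onto a map by
# intersecting the direction ray from the drone with the ground plane z = 0.
# Not the team's method; all coordinates and angles are assumptions.
import math

def project_to_ground(drone_x, drone_y, altitude, azimuth_deg, elevation_deg):
    """Return (x, y) map coordinates where the sound-direction ray meets the ground.

    azimuth_deg:   direction in the horizontal plane (0 = +x axis)
    elevation_deg: angle below the horizon (positive = pointing downward)
    """
    elev = math.radians(elevation_deg)
    if elev <= 0:
        return None  # ray points at or above the horizon; no ground intersection
    horizontal_range = altitude / math.tan(elev)
    az = math.radians(azimuth_deg)
    return (drone_x + horizontal_range * math.cos(az),
            drone_y + horizontal_range * math.sin(az))

# Example: a drone hovering 30 m up hears a source 40 degrees below the horizon.
print(project_to_ground(drone_x=100.0, drone_y=250.0, altitude=30.0,
                        azimuth_deg=135.0, elevation_deg=40.0))
```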

The final element is an all-weather microphone array consisting of 16 microphones all connected by one cable for easy installation on a drone. This makes it possible to perform a search and rescue even in adverse weather.

Credit: Tokyo Institute of Technology

It is generally accepted that survival probability is drastically reduced for victims that are not rescued within the first 72 hours after a disaster. Establishing technology for a swift search and rescue has been a pressing issue.

Most existing technologies that use drones to search for disaster victims rely on cameras or similar devices. These are ineffective when victims are hard to see, for example when they are buried or in the dark, which has been a major impediment in search and rescue operations. Because the new system detects sounds made by disaster victims, it may be able to mitigate such problems. It is expected to become a promising tool for rescue teams in the near future, as drones for finding victims needing rescue in disaster areas become widely available.

The research group will continue working to make the system even easier to use and more robust by performing demonstrations and experiments in simulated disaster conditions. One goal is to add functionality for classifying sound source types, rather than simply detecting them, so that relevant sounds from victims can be distinguished from irrelevant ones. Another goal is to develop the system as a package of intelligent sensors that can be connected to various types of drones.

