Automatic building mapping could help emergency responders

Sep 24, 2012 by Larry Hardesty
The prototype sensor included a stripped-down Microsoft Kinect camera (top) and a laser rangefinder (bottom), which looks something like a camera lens seen side-on. Credit: Patrick Gillooly

MIT researchers have built a wearable sensor system that automatically creates a digital map of the environment through which the wearer is moving. The prototype system, described in a paper slated for the Intelligent Robots and Systems conference in Portugal next month, is envisioned as a tool to help emergency responders coordinate disaster response.

In experiments conducted on the MIT campus, a graduate student wearing the sensor system wandered the halls, and the sensors wirelessly relayed data to a laptop in a distant conference room. Observers in the conference room were able to track the student's progress on a map that sprang into being as he moved.

Connected to the array of sensors is a handheld pushbutton device that the wearer can use to annotate the map. In the prototype, depressing the button simply designates a particular location as a point of interest. But the researchers envision that emergency responders could use a similar system to add voice or text tags to the map—indicating, say, structural damage or a toxic spill.

"The operational scenario that was envisioned for this was a hazmat situation where people are suited up with the full suit, and they go in and explore an environment," says Maurice Fallon, a research scientist in MIT's Computer Science and Artificial Intelligence Laboratory, and lead author on the new paper. "The current approach would be to textually summarize what they had seen afterward—'I went into this room on the left, I saw this, I went into the next room,' and so on. We want to try to automate that."

Fallon is joined on the paper by professors John Leonard and Seth Teller, of, respectively, the departments of Mechanical Engineering and of Electrical Engineering and Computer Science (EECS), and EECS grad students Hordur Johannsson and Jonathan Brookshire.

Maurice Fallon, a research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory, demonstrates how the sensor is worn. Credit: Patrick Gillooly

Shaky aim

The new work builds on previous research on systems that enable robots to map their environments. But adapting the system so that a human could wear it required a number of modifications.

One of the sensors that the system uses is a laser rangefinder, which sweeps a laser beam around a 270-degree arc and measures the time that it takes the light pulses to return. If the rangefinder is level, it can provide very accurate information about the distance of the nearest walls, but a walking human jostles it much more than a rolling robot does. Similarly, sensors in a robot's wheels can provide accurate information about its physical orientation and the distances it covers, but that's missing with humans. And as emergency workers responding to a disaster might have to move among several floors of a building, the system also has to recognize changes in altitude, so it doesn't inadvertently overlay the map of one floor with information about a different one.
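
To make the rangefinder's geometry concrete, here is a minimal sketch (not from the paper; the scan format and the assumption of evenly spaced beams are illustrative) of how one 270-degree sweep of time-of-flight ranges becomes 2-D wall points in the sensor's frame:

```python
import math

def scan_to_points(ranges, fov_deg=270.0):
    """Convert one rangefinder sweep into 2-D points in the sensor frame.

    ranges: per-beam distances in meters, assumed evenly spaced across
    the field of view; beams with no return are None.
    """
    n = len(ranges)
    fov = math.radians(fov_deg)
    start = -fov / 2.0  # sweep centered on the sensor's forward axis
    points = []
    for i, r in enumerate(ranges):
        if r is None:
            continue  # no return for this beam
        theta = start + fov * i / (n - 1)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```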

So in addition to the rangefinder, the researchers also equipped their sensor platform with a cluster of accelerometers and gyroscopes, a camera, and, in one group of experiments, a barometer (changes in air pressure proved to be a surprisingly good indicator of floor transitions). The gyroscopes could infer when the rangefinder was tilted—information the mapping algorithms could use in interpreting its readings—and the accelerometers provided some information about the wearer's velocity and very good information about changes in altitude.
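
As a rough illustration of both corrections (the first-order tilt model, the assumed floor height, and the thresholds are guesses for the sketch, not values from the paper), one might compensate a pitched scan and watch barometric altitude for floor transitions like this:

```python
import math

def correct_for_tilt(ranges, pitch_rad):
    """First-order tilt compensation: project ranges measured by a pitched
    scanner onto the horizontal plane (roll is ignored for brevity)."""
    return [r * math.cos(pitch_rad) if r is not None else None
            for r in ranges]

def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Standard-atmosphere conversion from pressure (hPa) to altitude (m)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** 0.1903)

def floor_change(prev_alt_m, curr_alt_m, floor_height_m=3.0, tol_m=0.5):
    """Signed number of floors moved, reported once the altitude change is
    within tol_m of a whole number of (assumed) floor heights."""
    delta = curr_alt_m - prev_alt_m
    floors = round(delta / floor_height_m)
    if floors and abs(delta - floors * floor_height_m) < tol_m:
        return floors
    return 0
```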

Adjudicating the data from all the other sensors is the camera. Every few meters, the camera takes a snapshot of its surroundings, and software extracts a couple of hundred visual features from the image—particular patterns of color, or contours, or inferred three-dimensional shapes. Each batch of features is associated with a particular location on the map.
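
The paper's feature pipeline isn't spelled out here, but a sketch using OpenCV's ORB detector (a stand-in choice; the descriptor type and the database layout are assumptions) shows the basic pattern of extracting a few hundred features per snapshot and filing them under the current map location:

```python
import cv2

orb = cv2.ORB_create(nfeatures=200)  # "a couple of hundred" features per snapshot
snapshot_db = []  # list of (map_location, descriptors) pairs

def store_snapshot(map_location, image_bgr):
    """Extract binary descriptors from a camera snapshot and file them
    under the location estimate at capture time."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is not None:
        snapshot_db.append((map_location, descriptors))
```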


Seeing is believing

If the person wearing the sensors returns to an area that he or she has previously visited, the system's location estimate could be off: For instance, its compensation for the tilt of the rangefinder might not have been perfect, and a wall now looks several feet farther away than it did, or its inference of position from accelerometer data could be off. In such cases, a fresh snapshot and a comparison of the visual features with those already stored can help correct its location estimate.
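
Continuing the sketch above (the brute-force matcher, ratio test, and match threshold are illustrative, not the paper's method), recognizing a revisited place reduces to comparing the fresh snapshot's features against the stored batches:

```python
import cv2

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # Hamming distance suits binary descriptors

def find_revisit(curr_descriptors, snapshot_db, min_matches=40):
    """Return the stored location whose features best match the current
    snapshot, if enough matches survive the ratio test; the mapper can
    then snap its drifted estimate back toward that location."""
    best_location, best_count = None, 0
    for location, stored in snapshot_db:
        pairs = matcher.knnMatch(curr_descriptors, stored, k=2)
        good = [p for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_count:
            best_location, best_count = location, len(good)
    return best_location if best_count >= min_matches else None
```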

The prototype of the sensor platform consists of a handful of devices attached to a sheet of hard plastic about the size of an iPad, which is worn on the chest like a backward backpack. The only sensor whose volume can't be reduced significantly is the rangefinder, so in principle, the whole system could be shrunk to about the size of a coffee mug.

Wolfram Burgard, a professor of computer science at the University of Freiburg in Germany, says that the MIT researchers' work is on the general topic of SLAM, or simultaneous localization and mapping. "Originally, this came out as a problem of robotics," Burgard says. "This idea of having a SLAM system that is attached to a human's body, for figuring out where it is, is actually innovative and pretty useful. For first responders, a technology like this one might be highly relevant."

"With a robot, we typically assume that the robot lives in a plane," Burgard continues. "What they definitely tackled is the problem of height and dealing with staircases, as the human walks up and down. The sensors are not always straight, because the body shakes. These are problems that they tackle in their approach, and where it actually goes beyond the standard 2-D SLAM."
