New algorithm improves robot vision

Dec 07, 2005

Except in fanciful movies like 2003's The Matrix Revolutions, where fearsome squid-like robots maneuvered with incredible ease, most robots are too clumsy to move around obstacles at high speeds. This is true in large part because they have trouble judging, from the images they "see," how far away obstacles are. This week, however, Stanford computer scientists will unveil a machine vision algorithm that gives robots the ability to approximate distances from single still images.

"Many people have said that depth estimation from a single monocular image is impossible," says computer science Assistant Professor Andrew Ng, who will present a paper on his research at the Neural Information Processing Systems Conference in Vancouver Dec. 5-8. "I think this work shows that in practical problems, monocular depth estimation not only works well, but can also be very useful."

With substantial sensor arrays and considerable investment, robots are gaining the ability to navigate adequately. Stanley, the Stanford robot car that drove a desert course in the DARPA Grand Challenge this past October, used lasers and radar as well as a video camera to scan the road ahead. Using the work of Ng and his students, robots that are too small to carry many sensors or that must be built cheaply could navigate with just one video camera. In fact, using a simplified version of the algorithm, Ng has enabled a radio-controlled car to drive autonomously for several minutes through a cluttered, wooded area before crashing.

Inferring depth

To give robots depth perception, Ng and graduate students Ashutosh Saxena and Sung H. Chung designed software capable of learning to spot certain depth cues in still images. The cues include variations in texture (surfaces that appear detailed are more likely to be close), edges (lines that appear to be converging, such as the sides of a path, indicate increasing distance) and haze (objects that appear hazy are likely farther).
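The three cues described above can be illustrated with a small sketch. This is not the authors' software; it is a hypothetical NumPy example showing how texture, edge, and haze statistics might be computed for one grayscale image patch, with the names (`patch_cues`, `texture`, `edges`, `haze`) invented for illustration.

```python
import numpy as np

def patch_cues(patch):
    """Illustrative depth-cue features for one grayscale patch
    (values in [0, 1]): texture energy, edge strength, and haze."""
    # Texture: intensity variance -- detailed (nearby) surfaces
    # vary more than smooth, distant ones.
    texture = float(np.var(patch))
    # Edges: mean gradient magnitude from finite differences.
    gy, gx = np.gradient(patch.astype(float))
    edges = float(np.mean(np.hypot(gx, gy)))
    # Haze: distant regions wash out toward bright, low-contrast
    # gray; mean brightness minus contrast is a crude proxy.
    haze = float(np.mean(patch) - np.std(patch))
    return texture, edges, haze

# A noisy "near" patch vs. a smooth, bright "far" patch.
rng = np.random.default_rng(0)
near = rng.uniform(0.0, 1.0, (16, 16))
far = np.full((16, 16), 0.9) + rng.normal(0.0, 0.01, (16, 16))

t_near, e_near, _ = patch_cues(near)
t_far, e_far, _ = patch_cues(far)
```

On these synthetic patches the noisy "near" patch yields higher texture and edge scores than the smooth "far" one, which is the direction of the cues the article describes. The actual system learned how to weight such cues from training data rather than using hand-set rules.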

To analyze such cues as thoroughly as possible, the software breaks images into sections and analyzes them both individually and in relationship to neighboring sections. This allows the software to infer how objects in the image appear relative to each other. The software also looks for cues in the image at varying levels of magnification to ensure that it doesn't miss fine details or prevailing trends—missing, as it were, the forest for the trees.
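A minimal sketch of that multi-scale decomposition, again hypothetical rather than the published code: coarsen the image by block averaging (a simple image pyramid), then tile each level into fixed-size sections. The function name and parameters are assumptions for illustration.

```python
import numpy as np

def multiscale_patches(image, patch=4, scales=(1, 2, 4)):
    """Split a grayscale image into patch x patch sections at several
    magnifications, so both fine detail and broad trends are seen.
    Returns {scale: grid of patches}."""
    grids = {}
    for s in scales:
        # Coarsen by averaging s x s blocks.
        h, w = image.shape[0] // s, image.shape[1] // s
        coarse = image[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3))
        # Tile the coarsened image into patch x patch sections.
        rows, cols = coarse.shape[0] // patch, coarse.shape[1] // patch
        grids[s] = coarse[:rows * patch, :cols * patch].reshape(
            rows, patch, cols, patch).swapaxes(1, 2)
    return grids

img = np.arange(16 * 16, dtype=float).reshape(16, 16)
grids = multiscale_patches(img)
```

For a 16x16 image this yields a 4x4 grid of 4x4 patches at full resolution and a single 4x4 patch at the coarsest scale; features from a section's neighbors and from coarser levels would then be fed to the learned depth model together.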

Using the Stanford algorithm, robots were able to judge distances in indoor and outdoor locations with an average error of about 35 percent—in other words, a tree that is actually 30 feet away would be perceived as being between 20 and 40 feet away. A robot moving at 20 miles per hour and judging distances from video frames 10 times a second has ample time to adjust its path even with this uncertainty. Ng points out that compared to traditional stereo vision algorithms—ones that use two cameras and triangulation to infer depth—the new software was able to reliably detect obstacles five to ten times farther away.
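The figures above check out with a little arithmetic (illustrative only, not part of the Stanford work): a 35 percent relative error on a 30-foot distance is roughly plus or minus 10 feet, and at 20 mph a robot covers under 3 feet between frames sampled at 10 Hz.

```python
# Error band: ~35% average relative error on a 30-foot tree.
true_dist = 30.0                         # feet
rel_error = 0.35
low = true_dist * (1 - rel_error)        # 19.5 ft
high = true_dist * (1 + rel_error)       # 40.5 ft -- "between 20 and 40"

# Reaction distance: 20 mph sampled at 10 frames per second.
speed_fps = 20 * 5280 / 3600             # ~29.3 feet per second
per_frame = speed_fps / 10               # feet traveled between frames
```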

"The difficulty of getting visual depth perception to work at large distances has been a major barrier to getting robots to move and to navigate at high speeds," Ng says. "I'd like to build an aircraft that can fly through a forest, flying under the tree canopy and dodging around trees." Of course, that brings to mind another movie image: that of the airborne chase scene through the forest on the Ewok planet in Return of the Jedi. Ng wants to take that idea out of the realm of fiction and make it a reality.

Source: Stanford University
