Lower-cost navigation system developed for self-driving cars

Three Ford Fusion autonomous test vehicles.

A new software system developed at the University of Michigan uses video game technology to help solve one of the most daunting hurdles facing self-driving and automated cars—the high cost of the laser scanners they use to determine their location.

Ryan Wolcott, a U-M doctoral candidate in computer science and engineering, estimates that it could shave thousands of dollars from the cost of these vehicles. The software enables them to navigate using a single video camera, delivering the same level of accuracy as laser scanners at a fraction of the cost. His paper detailing the system was recently named best student paper at the Conference on Intelligent Robots and Systems in Chicago.

"The laser scanners used by most self-driving cars in development today cost tens of thousands of dollars, and I thought there must be a cheaper sensor that could do the same job," he said. "Cameras only cost a few dollars each and they're already in a lot of cars. So they were an obvious choice."

His system builds on the navigation approach used in other self-driving cars currently in development, including Google's vehicle. Those cars use three-dimensional laser scanning technology to create a real-time map of their environment, then compare that real-time map to a pre-drawn map stored in the system. By making thousands of comparisons per second, they're able to determine the vehicle's location to within a few centimeters.

Wolcott's system uses the same approach, with one crucial difference—his software converts the map data into a three-dimensional picture much like a video game. The car's navigation system can then compare these synthetic pictures with the real-world pictures streaming in from a conventional video camera.
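In outline, this is pose-hypothesis scoring: render a synthetic view of the prior 3-D map from each candidate pose, compare it with the live camera frame, and keep the best match. The sketch below illustrates the idea with a toy similarity measure (zero-normalized cross-correlation); the published system actually uses normalized mutual information and GPU rendering, and the `render` function here is a stand-in, not part of the real pipeline.

```python
import numpy as np

def similarity(synthetic, camera):
    """Zero-normalized cross-correlation between two grayscale images.
    (The real system scores with normalized mutual information, which is
    more robust to lighting differences; NCC keeps this sketch short.)"""
    a = synthetic - synthetic.mean()
    b = camera - camera.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def localize(camera_frame, candidate_poses, render):
    """Return the candidate pose whose synthetic rendering of the prior
    map best matches the live camera frame."""
    scores = [similarity(render(p), camera_frame) for p in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]

# Toy demo: a fake `render` whose views degrade with distance from the
# "true" pose (here x = 2.0). In the real system this step is a GPU
# rendering of the stored LIDAR map as seen from pose p.
rng = np.random.default_rng(0)
true_view = rng.random((48, 64))

def render(pose):
    noise = abs(pose - 2.0)
    return true_view + noise * rng.standard_normal((48, 64))

poses = [0.0, 1.0, 2.0, 3.0]
best = localize(true_view, poses, render)  # the pose rendering true_view
```

In practice the candidate poses come from a motion model rather than a fixed grid, and the rendering-and-scoring loop is what the GPU mentioned below accelerates.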

Ryan Eustice, a U-M associate professor of naval architecture and marine engineering who is working with Wolcott on the technology, said one of the key challenges was designing a system that could process a massive amount of video data in real time.

"Visual data takes up much more space than any other kind of data," he said. "So one of the challenges was to build a system that could do that heavy lifting and still deliver an accurate location in real time."

To do the job, the team again turned to the world of video games, building a system out of graphics processing technology that's well known to gamers. The system is inexpensive, yet able to make thousands of complex decisions every second.

"When you're able to push the processing work to a graphics processing unit, you're using technology that's mass-produced and widely available," Eustice said. "It's very powerful, but it's also very cost-effective."

The team has successfully tested the system on the streets of downtown Ann Arbor. While they kept the car under manual control for safety, the navigation system successfully provided accurate location information. Further testing is slated for this year at U-M's new M City test facility, set to open this summer.

The system won't completely replace laser scanners, at least for now—they are still needed for other functions like long-range obstacle detection. But the researchers say it's an important step toward building lower-cost navigation systems. Eventually, their research may also help self-driving vehicle technology move past map-based navigation and pave the way to systems that see the road more like humans do.

"Map-based navigation is going to be an important part of the first wave of driverless vehicles, but it does have limitations—you can't drive anywhere that's not on the map," Eustice said. "Putting cameras in cars and exploring what we can do with them is an early step toward cars that have human-level perception."

The camera-based system still faces many of the same hurdles as laser-based navigation, including how to adapt to varying weather conditions and light levels, as well as unexpected changes in the road. But it's a valuable new tool in the still-evolving arsenal of technology that's moving driverless cars toward reality.


More information: "Visual Localization within LIDAR Maps for Automated Urban Driving," robots.engin.umich.edu/publica … s/rwolcott-2014a.pdf
Citation: Lower-cost navigation system developed for self-driving cars (2015, January 15) retrieved 18 August 2019 from https://phys.org/news/2015-01-lower-cost-self-driving-cars.html


User comments

Jan 15, 2015
Cool approach to visualizing what is around the car. I think it is a good starting point for understanding what is happening around it, and for determining whether it is safe to proceed.

Jan 15, 2015
Most of the recent cars that can already drive completely autonomously, like those from BMW, Mercedes and Audi (as seen at CES), use cameras and other cheaper sensors instead of these expensive rotating laser scanners. Many big companies already sell complete units like this with software integrated for self-driving. Honestly, this article reads like it's six years too late.

Jan 16, 2015
Most of the recent cars that can already drive completely autonomously, like those from BMW, Mercedes and Audi (as seen at CES), use cameras and other cheaper sensors instead of these expensive rotating laser scanners. Many big companies already sell complete units like this with software integrated for self-driving. Honestly, this article reads like it's six years too late.

Although not stated, the mention of the need for some heavy processing makes me think: some form of parallax computation between successive frames. Is that how these systems from BMW et al. work? Manufacturers have been putting simple cameras on cars for some years, but that's not the same.
