Lower-cost navigation system developed for self-driving cars

January 15, 2015 by Gabe Cherry
Image caption: Three Ford Fusion autonomous test vehicles.

A new software system developed at the University of Michigan uses video game technology to help solve one of the most daunting hurdles facing self-driving and automated cars—the high cost of the laser scanners they use to determine their location.

Ryan Wolcott, a U-M doctoral candidate in computer science and engineering, estimates that it could shave thousands of dollars from the cost of these vehicles. The technology enables them to navigate using a single video camera, delivering the same level of accuracy as laser scanners at a fraction of the cost. His paper detailing the system was recently named best student paper at the Conference on Intelligent Robots and Systems in Chicago.

"The laser scanners used by most in development today cost tens of thousands of dollars, and I thought there must be a cheaper sensor that could do the same job," he said. "Cameras only cost a few dollars each and they're already in a lot of cars. So they were an obvious choice."

His system builds on the navigation approach used in other self-driving cars currently in development, including Google's vehicle. Those cars use three-dimensional laser scanning technology to create a real-time map of their environment, then compare that real-time map to a pre-drawn map stored in the system. By making thousands of comparisons per second, they're able to determine the vehicle's location within a few centimeters.
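
As a rough illustration of that comparison step, the sketch below rasterizes a prior map and a live scan onto the same grid and scores thousands of candidate offsets by simple correlation. It is a minimal, assumed example in Python, not the code used in these vehicles; the occupancy-grid representation and brute-force search are simplifications for clarity.

    # Minimal sketch of map-matching localization (illustrative only).
    # The prior map and the live scan are assumed to be occupancy grids of
    # the same shape; the candidate shift with the highest correlation is
    # taken as the vehicle's offset from its predicted pose.
    import numpy as np

    def score_pose(prior_map, live_scan, dx, dy):
        """Correlate the live scan against the prior map shifted by (dx, dy) cells."""
        shifted = np.roll(np.roll(prior_map, dy, axis=0), dx, axis=1)  # wraps at edges; fine for a toy example
        return float(np.sum(shifted * live_scan))

    def localize(prior_map, live_scan, search_radius=20):
        """Brute-force search over candidate offsets; return the best (dx, dy) and its score."""
        best, best_score = (0, 0), -np.inf
        for dx in range(-search_radius, search_radius + 1):
            for dy in range(-search_radius, search_radius + 1):
                s = score_pose(prior_map, live_scan, dx, dy)
                if s > best_score:
                    best, best_score = (dx, dy), s
        return best, best_score

    # Toy usage with random grids standing in for real map and scan data.
    rng = np.random.default_rng(0)
    prior_map = (rng.random((200, 200)) > 0.9).astype(float)
    live_scan = np.roll(prior_map, (3, -5), axis=(0, 1))  # scan offset by a known amount
    print(localize(prior_map, live_scan))  # should recover (dx, dy) = (-5, 3)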

Wolcott's system uses the same approach, with one crucial difference—his software converts the map data into a three-dimensional picture much like a video game. The car's navigation system can then compare these synthetic pictures with the real-world pictures streaming in from a conventional video camera.
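
One way to make that comparison concrete: render a synthetic view of the stored map from a candidate camera pose, then score how well it agrees with the incoming camera frame. The metric below, normalized mutual information, is a common choice for comparing images that come from different sources; it is assumed here for illustration rather than taken from Wolcott's implementation, and the render() and candidate_poses names in the usage comment are hypothetical placeholders.

    # Hedged sketch: score the agreement between a synthetic image rendered
    # from the prior map and a live camera frame using normalized mutual
    # information (NMI). Illustrative only.
    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=32):
        """NMI between two equally sized grayscale images with values in [0, 255]."""
        hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                       bins=bins, range=[[0, 256], [0, 256]])
        p_ab = hist_2d / hist_2d.sum()  # joint distribution
        p_a = p_ab.sum(axis=1)          # marginal for image A
        p_b = p_ab.sum(axis=0)          # marginal for image B

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(p_a) + entropy(p_b)) / entropy(p_ab)  # higher = better match

    # Toy demonstration with random 8-bit images as stand-ins.
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=(120, 160))
    b = rng.integers(0, 256, size=(120, 160))
    print(normalized_mutual_information(a, a))  # identical images -> 2.0
    print(normalized_mutual_information(a, b))  # unrelated images -> close to 1.0

    # Hypothetical usage: pick the candidate pose whose rendering best matches the frame.
    # best_pose = max(candidate_poses,
    #                 key=lambda pose: normalized_mutual_information(render(prior_map, pose), frame))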

Ryan Eustice, a U-M associate professor of naval architecture and marine engineering who is working with Wolcott on the technology, said one of the key challenges was designing a system that could process a massive amount of video data in real time.

"Visual data takes up much more space than any other kind of data," he said. "So one of the challenges was to build a system that could do that heavy lifting and still deliver an accurate location in ."

To do the job, the team again turned to the world of video games, building a system out of graphics processing technology that's well known to gamers. The system is inexpensive, yet able to make thousands of complex decisions every second.

"When you're able to push the processing work to a graphics processing unit, you're using technology that's mass-produced and widely available," Eustice said. "It's very powerful, but it's also very cost-effective."

The team has successfully tested the system on the streets of downtown Ann Arbor. While they kept the car under manual control for safety, the navigation system successfully provided accurate location information. Further testing is slated for this year at U-M's new M City test facility, set to open this summer.

The system won't completely replace laser scanners, at least for now—they are still needed for other functions like long-range obstacle detection. But the researchers say it's an important step toward building lower-cost navigation systems. Eventually, their research may also help self-driving vehicle technology move past map-based navigation and pave the way to systems that see the road more like humans do.

"Map-based navigation is going to be an important part of the first wave of driverless vehicles, but it does have limitations—you can't drive anywhere that's not on the map," Eustice said. "Putting cameras in cars and exploring what we can do with them is an early step toward cars that have human-level perception."

The camera-based system still faces many of the same hurdles as laser-based navigation, including how to adapt to varying weather conditions and light levels, as well as unexpected changes in the road. But it's a valuable new tool in the still-evolving arsenal of technology that's moving driverless cars toward reality.

More information: "Visual Localization within LIDAR Maps for Automated Urban Driving," robots.engin.umich.edu/publica … s/rwolcott-2014a.pdf

Comments

PPihkala (Jan 15, 2015)
Cool approach to visualizing what is around the car. I think it is a good starting point for understanding what is happening around the vehicle and for determining whether it is safe to proceed.
maxb500_live_nl (Jan 15, 2015)
Most of the recent cars that can drive completely autonomously, like those from BMW, Mercedes and Audi shown at CES, already use cameras and other cheaper sensors instead of these expensive rotating laser scanners. Many big companies already sell complete units like this with the self-driving software integrated. Honestly, this article reads like it's six years too late.
alfie_null (Jan 16, 2015), replying to the comment above
Although not stated, the mention of the need for some heavy processing makes me think of some form of parallax computation between succeeding frames. Is that how these systems from BMW et al. work? Manufacturers have been putting simple cameras on cars for some years, but that's not the same.
