New way to test self-driving cars could cut 99.9 percent of validation costs

May 24, 2017 by Sue Carney

Mobility researchers at the University of Michigan have devised a new way to test autonomous vehicles that bypasses the billions of miles they would need to log for consumers to consider them road-ready.

The process, which was developed using data from more than 25 million miles of real-world driving, can cut the time required to evaluate how robotic vehicles handle potentially dangerous situations by a factor of 300 to 100,000. And it could save 99.9 percent of testing time and costs, the researchers say.

They outline the approach in a new white paper published by Mcity, a U-M-led public-private partnership to accelerate advanced mobility vehicles and technologies.

"Even the most advanced and largest-scale efforts to test automated vehicles today fall woefully short of what is needed to thoroughly test these robotic cars," said Huei Peng, director of Mcity and the Roger L. McCarthy Professor of Mechanical Engineering at U-M.

In essence, the new accelerated evaluation process breaks down difficult real-world driving situations into components that can be tested or simulated repeatedly, exposing automated vehicles to a condensed set of the most challenging driving situations. In this way, just 1,000 miles of testing can yield the equivalent of 300,000 to 100 million miles of real-world driving.

While 100 million miles may sound like overkill, it's not nearly enough to give researchers the data they need to certify the safety of a driverless vehicle. That's because the difficult scenarios they need to zero in on are rare: a crash that results in a fatality occurs only once in every 100 million miles of driving.

Yet for consumers to accept driverless vehicles, the researchers say tests will need to prove with 80 percent confidence that they're 90 percent safer than human drivers. To get to that confidence level, test vehicles would need to be driven in simulated or real-world settings for 11 billion miles. But it would take nearly a decade of round-the-clock testing to reach just 2 million miles in typical urban conditions.
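
To see why the mileage requirements balloon, it helps to treat fatal crashes as rare random events. The short sketch below is only a back-of-the-envelope illustration using the one-fatality-per-100-million-miles rate quoted above; the Poisson assumption is ours, and this is not the statistical model from the Mcity white paper.

    import math

    # Back-of-the-envelope sketch (not the Mcity white paper's model):
    # treat fatal crashes as a Poisson process at the human-driver rate
    # quoted in the article, roughly one per 100 million miles.
    FATAL_RATE_PER_MILE = 1 / 100_000_000

    def expected_fatal_events(miles):
        """Expected number of fatal-crash-level events in `miles` of driving."""
        return miles * FATAL_RATE_PER_MILE

    def prob_zero_events(miles):
        """Probability of observing no fatal-crash-level event at all."""
        return math.exp(-expected_fatal_events(miles))

    # A decade of round-the-clock urban testing (~2 million miles) almost
    # certainly produces zero fatal events -- far too little signal to
    # compare an automated fleet against human drivers.
    print(prob_zero_events(2_000_000))            # ~0.98
    # Even 11 billion miles yields only on the order of a hundred such
    # events, which is why the rare, safety-critical scenarios have to be
    # concentrated rather than simply waited for.
    print(expected_fatal_events(11_000_000_000))  # 110.0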

Beyond that, fully automated, driverless vehicles will require a very different type of validation than the crash-sled dummy tests used for today's cars. Even the questions researchers have to ask are more complicated. Instead of asking, "What happens in a crash?" they'll need to measure how well an automated vehicle can prevent one from happening.

"Test methods for traditionally driven cars are something like having a doctor take a patient's blood pressure or heart rate, while testing for automated vehicles is more like giving someone an IQ test," said Ding Zhao, assistant research scientist in the U-M Department of Mechanical Engineering and co-author of the new white paper, along with Peng.

To develop the four-step accelerated approach, the U-M researchers analyzed data from 25.2 million miles of real-world driving collected by two U-M Transportation Research Institute projects—Safety Pilot Model Deployment and Integrated Vehicle-Based Safety Systems. Together they involved nearly 3,000 vehicles and volunteers over the course of two years.

From that data, the researchers:

  • Identified events that could contain "meaningful interactions" between an automated vehicle and one driven by a human, and created a simulation that replaced all the uneventful miles with these meaningful interactions.
  • Programmed their simulation to consider human drivers the major threat to automated vehicles and placed human drivers randomly throughout.
  • Conducted mathematical tests to assess the risk and probability of certain outcomes, including crashes, injuries, and near-misses.
  • Interpreted the accelerated results, using a technique called "importance sampling" to learn how the automated vehicle would perform, statistically, in everyday driving situations (a rough sketch of this reweighting idea follows this list).
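
To make the importance-sampling step concrete, here is a minimal sketch of the reweighting idea: sample a demanding cut-in maneuver far more often than it occurs in natural driving, then weight each outcome by the likelihood ratio between natural and skewed driving so that the final estimate still reflects everyday traffic. All distributions, thresholds, and sample counts below are invented for illustration and are not taken from the U-M study.

    import math
    import random

    # Minimal importance-sampling sketch of "accelerated evaluation".
    # Every number here (distributions, braking limit, sample count) is a
    # made-up placeholder, not a value from the U-M data.
    random.seed(0)

    # Natural driving: the deceleration the automated car needs when a human
    # driver cuts in is usually mild (exponential, mean 1.5 m/s^2, assumed).
    NATURAL_MEAN = 1.5
    # Accelerated testing: skew the cut-ins so demanding events are common.
    ACCEL_MEAN = 6.0
    # Toy crash model: a crash occurs if the required deceleration exceeds
    # the vehicle's braking limit.
    BRAKING_LIMIT = 8.0

    def exp_pdf(x, mean):
        """Density of an exponential distribution with the given mean."""
        return math.exp(-x / mean) / mean

    def crash_probability(n_samples):
        """Estimate P(crash) under natural driving while sampling from the
        skewed (accelerated) distribution, weighting each crash by the
        likelihood ratio natural_pdf / accelerated_pdf."""
        total = 0.0
        for _ in range(n_samples):
            x = random.expovariate(1.0 / ACCEL_MEAN)  # demanding cut-in
            if x > BRAKING_LIMIT:                     # crash in the toy model
                total += exp_pdf(x, NATURAL_MEAN) / exp_pdf(x, ACCEL_MEAN)
        return total / n_samples

    # A few thousand skewed samples already land close to the true value
    # exp(-8 / 1.5) ~= 0.0048; sampling natural driving directly would need
    # far more trials to see enough crashes at all.
    print(crash_probability(5_000))

This kind of reweighting is what lets a condensed schedule of challenging encounters stand in statistically for the long stretches of uneventful mileage it replaces.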

The accelerated evaluation process can be performed for different potentially dangerous maneuvers. Researchers evaluated the two most common situations they'd expect to result in serious crashes: an automated car following a human driver and a human driver merging in front of an automated car. The accuracy of the evaluation was determined by conducting and comparing accelerated and real-world simulations. More research is needed involving additional driving situations.
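
As a rough illustration of what the car-following case looks like once it is reduced to a testable component, the toy simulation below checks whether a following automated vehicle, with a given reaction delay and braking limit, avoids a lead vehicle that suddenly brakes hard. The kinematics are deliberately simplified and every parameter is hypothetical; none of it comes from the study's 25.2 million miles of data.

    # Toy car-following scenario: the lead vehicle brakes hard at t = 0 and
    # the following automated vehicle reacts after a delay. All parameters
    # are hypothetical illustrations, not values from the U-M study.

    def crashes(gap0, speed, lead_decel, av_decel, reaction_time, dt=0.01):
        """Return True if the following vehicle reaches the lead vehicle's
        position before both have stopped, under simple straight-line
        kinematics (meters, m/s, m/s^2, seconds)."""
        lead_pos, lead_speed = gap0, speed   # lead starts `gap0` meters ahead
        av_pos, av_speed = 0.0, speed        # both start at the same speed
        t = 0.0
        while lead_speed > 0.0 or av_speed > 0.0:
            lead_speed = max(0.0, lead_speed - lead_decel * dt)
            braking = av_decel if t >= reaction_time else 0.0
            av_speed = max(0.0, av_speed - braking * dt)
            lead_pos += lead_speed * dt
            av_pos += av_speed * dt
            if av_pos >= lead_pos:
                return True                  # collision
            t += dt
        return False

    # Generous gap, quick reaction, strong brakes: no crash in this toy setup.
    print(crashes(gap0=15.0, speed=30.0, lead_decel=6.0,
                  av_decel=8.0, reaction_time=0.5))   # False
    # Short gap, slow reaction, weaker brakes: crash.
    print(crashes(gap0=5.0, speed=30.0, lead_decel=8.0,
                  av_decel=6.0, reaction_time=1.0))   # True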

Explore further: Autonomous vehicles cannot be test-driven enough miles to demonstrate their safety, report says

More information: From the Lab to the Street: Solving the Challenge of Accelerating Automated Vehicle Testing: mcity.umich.edu/wp-content/uploads/2017/05/Mcity-White-Paper_Accelerated-AV-Testing.pdf

7 comments

Hyperfuzzy
not rated yet May 24, 2017
With nano speeds, you're gonna have to avoid idiots behind the wheels of other cars. The rest is definable.
antialias_physorg
not rated yet May 24, 2017
Well, what they describe is pretty much the standard automated stress testing used for any other piece of serious software (i.e. one where errors can cause human injury) out there.

Problem is that testing by simulation and testing in real surroundings are two different critters. You would not believe the types of errors our test center is finding just because it's real humans at the controls, acting in real timeframes and on realistic systems...many of these errors would never be found by automatic testing because they're due to timing, hardware and software interplay coupled with idiosyncratic behavior.
Eikka
not rated yet May 25, 2017
"Problem is that testing by simulation and testing in real surroundings are two different critters."

Additional problems are the data gathered for the tests. Different automated cars use different sensor technologies, different sensor placements, and interpret their data differently. How then do you record real-world equivalent information and present it to the car in a virtual format in a way that, when the car's neural network or statistical pattern analyzer etc. hops out into the real world, it isn't completely baffled by what it actually sees out there?

And how do you avoid the training problem where the computer latches on to some irrelevant artifact in your data? This issue has already been identified in visual recognition algorithms, which often fail completely and start identifying things from mere speckled noise.
TheGhostofOtto1923
not rated yet May 25, 2017
"How then do you record real-world equivalent information and present it to the car in a virtual format in a way that, when the car's neural network or statistical pattern analyzer etc. hops out into the real world, it isn't completely baffled by what it actually sees out there?"

Oh I know it's just so hard to imagine how they could EVER fix these things and so the tech will obviously never get done.

And yet it's unfolding right now, right before our eyes. Ten years tops for the complete transition.

So what salient info do you think you're missing? Aren't you even a little curious to find out?
winthrom
not rated yet May 25, 2017
I suggest getting dashboard camera data from police departments as a salient input.
Eikka
not rated yet May 27, 2017
"And yet it's unfolding right now, right before our eyes. Ten years tops for the complete transition."

Can I call you in ten years to say "ha ha"?
Eikka
not rated yet May 28, 2017
"what salient info do you think you're missing?"

Autonomous cars are lacking on several fronts:

a) not enough computing capacity (+power budget) to interpret relevant information from sensors
b) false philosophy of AI and intelligence in general is keeping the whole field back
c) unreliable sensor technologies

The computing power of state-of-the-art autonomous cars is on the level of a gnat, and they need to become several orders of magnitude faster and more efficient. Even Moore's law won't do it in just ten years, and Moore's law is dead.

The AI engineers assume they can just use statistical correlations to identify objects, but this gives the machine no understanding, has proven unreliable in practice, and requires gargantuan datasets.

The engineers are bridging the gap with active sensors such as radars and lidars, which interfere with each other and fail in bad weather. It's a dead-end solution, but it "works" well enough to generate the current hype.
