Autonomous vehicles cannot be test-driven enough miles to demonstrate their safety, report says

April 12, 2016, RAND Corporation

Autonomous vehicles would have to be driven hundreds of millions of miles and, under some scenarios, hundreds of billions of miles to create enough data to clearly demonstrate their safety, according to a new RAND report.

Under even the most aggressive test-driving assumptions, it would take existing fleets tens and even hundreds of years to log sufficient miles to adequately assess the safety of the vehicles when compared to human-driven vehicles, according to the analysis.

Researchers say the findings suggest that in order to advance autonomous vehicles into daily use, alternative testing methods must be developed to supplement on-the-road testing. Alternative methods might include accelerated testing, virtual testing and simulators, mathematical modeling, scenario testing and pilot studies.

"Our results show that developers of this technology and third-party testers cannot drive their way to safety," said Nidhi Kalra, co-author of the study and a senior scientist at RAND, a nonprofit research organization. "It's going to be nearly impossible for autonomous vehicles to log enough test-driving miles on the road to statistically demonstrate their safety, when compared to the rate at which injuries and fatalities occur in human-controlled cars and trucks."

According to the Centers for Disease Control and Prevention, motor vehicle crashes are a leading cause of premature death in the United States and are responsible for over $80 billion annually in medical care and lost productivity due to injuries. Autonomous vehicles hold enormous potential for managing this crisis, and researchers say they could significantly reduce the number of accidents caused by human error.

According to the National Highway Traffic Safety Administration, more than 90 percent of automobile crashes are caused by human errors such as driving too fast, as well as alcohol impairment, distraction and fatigue. Autonomous vehicles are never drunk, distracted or tired; these factors are involved in 41 percent, 10 percent and 2.5 percent of all fatal crashes, respectively.

However, researchers acknowledge autonomous vehicles may not eliminate all crashes, and the safety of human drivers is a critical benchmark against which to compare the safety of autonomous vehicles.

Although the total number of crashes, injuries and fatalities from human drivers is high, the rate of these failures is low in comparison with the number of miles that people drive. Americans drive nearly 3 trillion miles every year, according to the Bureau of Transportation Statistics. In 2013, there were 2.3 million injuries reported, which is a failure rate of 77 injuries per 100 million miles driven. The related 32,719 fatalities correspond to a failure rate of about 1 fatality per 100 million miles driven.
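As a quick check of that arithmetic, here is a minimal Python sketch using the figures quoted above (rounding differences are expected):

# Failure rates implied by the figures quoted above (BTS mileage, 2013 crash data).
miles_per_year = 3e12        # ~3 trillion vehicle-miles driven annually
injuries_2013 = 2.3e6        # reported injuries in 2013
fatalities_2013 = 32_719     # reported fatalities in 2013

per_100m_miles = 100e6       # normalize to "per 100 million miles driven"
print(f"Injuries per 100 million miles:   {injuries_2013 / miles_per_year * per_100m_miles:.0f}")    # ~77
print(f"Fatalities per 100 million miles: {fatalities_2013 / miles_per_year * per_100m_miles:.2f}")  # ~1.1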

"The most autonomous miles any developer has logged are about 1.3 million, and that took several years. This is important data, but it does not come close to the level of driving that is needed to calculate safety rates," said Susan M. Paddock, co-author of the study and senior statistician at RAND. "Even if autonomous vehicle fleets are driven 10 million miles, one still would not be able to draw statistical conclusions about and reliability."

Researchers caution that it may not be possible to establish with certainty the reliability of autonomous vehicles prior to making them available for public use. In parallel to creating new testing methods, it is imperative to develop regulations and policies that can evolve with the technology.


More information: The report, "Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?" is available at www.rand.org


13 comments


DDBear
5 / 5 (1) Apr 12, 2016
I worked on driverless cars in the 2000s and now I work on all kinds of unmanned systems in aerospace. The industry claims are exaggerated. There is too much legal liability for the manufacturers if the algorithm doesn't behave perfectly. These technologies will be wonderful driver "assist" features for a long time, but we won't see full autonomy until there is some dramatic shift in the legal framework (which could take decades because of longstanding legal traditions). For example, if someone is not paying attention and walks into the middle of the street, should the driverless car hit and kill the person, or should it swerve off the cliff and kill the driver? These kinds of legal dilemmas are what will keep driverless cars as fancy "assist" systems for a while.
pdavisgenoa
5 / 5 (1) Apr 12, 2016
The NHTSA has ruled that vehicles can be considered their own drivers for the purposes of the Federal Motor Vehicle Safety Standards.
The House and Senate have made the astounding decision to work together on legislation for autonomous vehicles.
The auto industry is highly motivated to make autonomous vehicles as safe as possible or risk losing billions in revenue.
You say that researchers have declared:
"it may not be possible to establish with certainty the reliability of autonomous vehicles prior to making them available for public use."
Please let us know which technological advance has ever been adopted with a certainty of reliability or safety? This is a ridiculous expectation.
Do you think we should wait on this impossible standard while we lose another $80 billion and 32,000 lives?
I know that's not what you're saying but unless this article is just informational then what is being offered here in the way of solutions that isn't already being pursued?
DDBear
not rated yet Apr 12, 2016
Vehicles can be considered their own drivers = liability for the company that created the drivers.

That's why we need legal liability reform first before fully driverless goes anywhere.

pdavisgenoa, you should answer the dilemma that I wrote earlier "if someone is not paying attention and walks into the middle of the street, should the driverless car hit and kill the person, or should it swerve off the cliff and kill the driver?" and who is liable?
Eikka
not rated yet Apr 12, 2016
Please let us know which technological advance has ever been adopted with a certainty of reliability or safety?


That's like saying if we invented the car today, we shouldn't even think about seatbelts until 60 years later.

Times change.
Eikka
not rated yet Apr 12, 2016
Do you think we should wait on this impossible standard while we lose another $80 billion and 32,000 lives?


The paradox in your thinking is that you're simultaneously arguing we don't need to ensure the vehicles are safe, and assuming that they are safe in order to claim that they -would- save those lives and money. That's just a voodoo argument.

In the absence of any reliable means to test whether the vehicles are safe, how do you know they will work as intended, instead of killing more people and destroying more property?

As the article points out, a 41% reduction in fatalities could be possible simply by installing alcometers in cars that prevent driving if the driver is drunk, although since it's an issue that concerns a minority of drivers, better policing and stricter laws would do.
TheGhostofOtto1923
3.7 / 5 (3) Apr 12, 2016
They can certainly demonstrate they're safer than the average human in a lot fewer miles. At some point in the near future this will become obvious and then the insurance industry will begin heavily penalizing at-risk drivers who do not use them.

The insurance industry only cares about relative risk.

And unlike human drivers, AI will become more dependable from lessons learned.
DDBear
not rated yet Apr 12, 2016
TheGhostofOtto1923 maybe you're onto the long term solution: If a fully autonomous system from company X proves safer than a human driver, perhaps the insurance companies will penalize the driver for not using this system, and in turn, the insurance company accepts all (reduced) liability from (relatively lower) errors caused by the system.
rkolter
5 / 5 (1) Apr 12, 2016
There is so, so much wrong with this article. I wrote until I filled the space. So I wiped it and will summarize.

1 - You do not need to compare entire populations. Use Sampling.
2 - If you compare entire populations the populations do not need to be of identical size.
3 - The two metrics being evaluated have a relationship that is not accounted for - adding autonomously driven miles reduces human-driven miles.
4 - There are a lot of metrics that can be used to judge safe driving beyond fatalities per mile, which essentially rates every possible situation leading to an accident and every possible fatality in an accident as identical. This statistic is fun to throw around, but essentially useless for purposes of comparing safe driving specifically between two vastly different types of drivers.
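For a sense of scale on point 4, here is a minimal Python sketch of how the required test mileage shrinks as the chosen safety metric becomes more frequent, again using the rule-of-three bound discussed above; the "any reported crash" rate is a hypothetical placeholder, not a figure from the article or the report:

# Required failure-free test mileage vs. frequency of the chosen safety metric,
# using the rule-of-three 95% bound (miles ~ 3 / rate). Fatality and injury
# rates are from the article; the crash rate is an assumed placeholder.
rates_per_mile = {
    "fatalities": 1.09e-8,                 # ~1 per 100 million miles
    "injuries": 7.7e-7,                    # ~77 per 100 million miles
    "any reported crash (assumed)": 2e-6,  # hypothetical placeholder
}

for metric, rate in rates_per_mile.items():
    print(f"{metric:>30}: ~{3 / rate:,.0f} zero-event miles needed")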

rkolter
not rated yet Apr 12, 2016
@oog -
1 - I don't have to suggest anything. The article states it is using data that has already been collected and cites the source for the data.
2 - Two populations do not need to be of identical size to compare them and gather meaningful data. There is nothing inherently wrong with comparing information from a set of 3 trillion points and a set of 1.3 million points. But they chose a metric that occurs only about once per 100 million miles, and then said "see, there aren't enough data points and never will be". The answer is to choose a different metric to measure safety by.
3 - If you take the 3 trillion miles driven by Americans in 2013, and divide by the recorded driving fatalities in America in 2013, you get a ratio of approximately 1 fatality per 100 million miles driven. This logically assumes each mile and each fatality are roughly identical. I don't understand your disagreement.
rkolter
5 / 5 (1) Apr 12, 2016
In the end, this is a poor article.

-- Choosing a ratio of 1 in 100 million miles for comparison when you know one of your data sets has significantly fewer data points than that is senseless.
-- Suggesting that this ratio is the ratio that will be used to judge safety, and therefore that the safety of autonomous driving can never be validated, is misleading.
-- Providing a specific data set of miles driven by all Americans in 2013, and comparing it to a data set of all autonomous miles ever driven across several years by a single developer and suggesting these data sets are equal is just bad math.

I am all in favor of autonomous driving. I am willing to change my position if provided evidence that meets some level of rigor. This is not that evidence.

rkolter
not rated yet Apr 12, 2016
And they need to be weather and condition aware, with the ability to gracefully decline to continue (ie pull off the road and stop in an extremely "safe" manner, in a location which is optimal for allowing the human to take over control.) (Picture someone drunk and sleeping at the wheel of an AV when the AV suddenly blares:"Warning! Danger ahead! warning! reverting to human control in 5, 4, 3,..")


Had to respond to this one. I agree the AI Driver of the future will have to be able to handle weather conditions at least as well as a human.

But given your example for TODAY - If someone is drunk and falls asleep at the wheel, the car WILL CRASH. If someone is drunk and falls asleep at the wheel of their AI driven car, and is then startled awake to take control, the car MAY CRASH. If those are my only two choices, it is still safer to let the drunk driver sleep it off in the AI driven vehicle.

Best option is for the drunk driver to not get in the car.
TheGhostofOtto1923
3 / 5 (2) Apr 12, 2016
Well if that's true then his insurance company will not allow his car to start whether it's AI or not.

And if he opts out of the preemptive auto-shutoff-for-habitual-drunkards option then no insurance company will cover him and he won't be able to get a car, AI or not.

And he'll end up in jail and fired like this asshole.
https://youtu.be/KSIzKhUtZgg

Problem solved.
DDBear
5 / 5 (1) Apr 13, 2016
I'm also familiar with aircraft certification and software safety processes (e.g. DO-178C). Even a tiny change to the software can have catastrophic effects on safety if a logic error slips through by accident. So if a particular driverless car software version, say beta 1.0, was tested for a million miles with its logic configuration frozen, and then it was updated to version 2.0, the testing may need to start over from the beginning in order to know the true reliability statistics for the new version, unless the reliability is proven via formal models, etc. Driverless cars will rely on highly complex nonlinear technologies such as machine learning on parallel computing architectures, which don't fit into formal deterministic models. So there will have to be some acceptance of the uncertainty by the legal/insurance community, and legal protection for manufacturers, for fully driverless cars to become a reality.
