Self-driving cars can't be perfectly safe – what's good enough? 3 questions answered

March 27, 2018 by Nicholas G. Evans, The Conversation
Is it going to stop? Credit: marat marihal

On March 19, an Uber self-driving vehicle being tested in Arizona struck and killed Elaine Herzberg, who was walking her bike across the street. This is the first time a self-driving vehicle has killed a pedestrian, and it raises questions about the ethics of developing and testing emerging technologies. Some answers will need to wait until the full investigation is complete. Even so, Nicholas Evans, a philosophy professor at the University of Massachusetts-Lowell who studies the ethics of autonomous vehicles' decision-making processes, says some questions can be answered now.

1. Could a human driver have avoided this crash?

Probably so. It's easy to think that most drivers would have trouble seeing a pedestrian crossing a street at night. But what's already clear about this particular event is that the road was not as dark as the local police chief initially claimed.

The chief also originally said Herzberg suddenly stepped out into traffic in front of the car. However, the disturbing video footage released by Uber and local authorities shows this isn't true: Rather, Herzberg had already walked across one lane of the two-lane road, and was continuing across when the Uber hit her. (The safety driver also didn't notice the pedestrian, but video suggests the driver was looking down, not through the windshield.)

A normal human driver, someone actively paying attention to the road, would likely have had little problem avoiding Herzberg: With headlights on while traveling 40 mph on an actually dark road, it's not difficult to avoid obstacles on a straightaway when they're 100 or more feet ahead, including people or wildlife trying to get across. This crash was avoidable.

One tragic implication of that fact is clear: A self-driving car killed a person. But there is a public significance too. At least this one Uber car drove itself on populated streets while unable to perform the crucial safety task of detecting a pedestrian, and braking or steering so as not to hit the person.

In the wake of Herzberg's death, the safety and reliability of Uber's self-driving cars has come into question. It's also worth examining the ethics: Just as Uber has been criticized for exploiting its drivers for profits, the company may arguably be exploiting the driving, riding and walking public for its own research purposes.

2. Even if this crash was avoidable, are self-driving cars still generally safer than human-driven cars?

Not yet. The death toll on U.S. roads is indeed alarming: approximately 32,000 deaths per year. The federal estimate is that 1.18 people die per 100 million road miles driven by humans. Uber's cars, however, drove only about 3 million miles before their first fatality. It's not fair to draw statistical conclusions from a single data point, but it's not a great start: Companies should be aiming to make their robots at least as good as humans, if not yet fulfilling the promise of being significantly better.
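The comparison those figures imply is quick arithmetic. A rough sketch, using only the two numbers cited above (and keeping in mind that a single fatality is far too little data for a reliable rate):

```python
# Rough fatality-rate comparison using the figures cited above.
# Federal estimate: 1.18 deaths per 100 million human-driven miles.
human_rate = 1.18 / 100_000_000  # deaths per mile

# Uber's autonomous fleet: 1 fatality in roughly 3 million miles.
uber_rate = 1 / 3_000_000  # deaths per mile

# How many times the human baseline is Uber's observed rate?
ratio = uber_rate / human_rate
print(f"Uber's observed rate is ~{ratio:.0f}x the human baseline")
# → Uber's observed rate is ~28x the human baseline
```

Again, one data point proves little either way; the point is only that the early numbers give no reason to assume the robots are already safer.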

Even if Uber's autonomous cars were better drivers, the numbers don't tell the whole story. Of the 32,000 people who die on U.S. roads each year, 5,000 to 6,000 are pedestrians. When aiming for safety improvements, should the goal be to reduce overall deaths – or to put special emphasis on protecting the most vulnerable victims? It's certainly hypothetically possible to imagine a self-driving car system that cuts overall road deaths in half – to 16,000 – while doubling the pedestrian death rate – to 12,000. Overall, that might seem far better than human drivers – but not from the perspective of people walking along the nation's roads!

My research group has been working to develop ethical decision frameworks for self-driving cars. One potential approach is called "maximin." Most fundamentally, that way of thinking suggests the people designing cars – both the vehicles themselves and the software that runs them – should identify the worst possible outcomes of any decision, even if rare, and work to minimize their effects. Anyone who has been unfortunate enough to be hit by a car both as a pedestrian and while in a vehicle knows that being on foot is far worse. Under maximin, people should design and test cars, among other things, to prioritize pedestrian safety.
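As a toy illustration of the rule itself (the actions and harm-severity scores below are invented for the example, not drawn from any real system), maximin picks the action whose worst possible outcome is least bad:

```python
# Toy maximin chooser. Severity scores are hypothetical; higher = worse.
# Each candidate action maps possible outcomes to a harm severity.
outcomes = {
    "brake hard": {"rear-end collision": 3, "no harm": 0},
    "swerve":     {"hit pedestrian": 10, "hit parked car": 4},
    "continue":   {"hit pedestrian": 10, "no harm": 0},
}

def maximin_choice(outcomes):
    """Pick the action whose worst-case outcome has the lowest severity."""
    return min(outcomes, key=lambda action: max(outcomes[action].values()))

print(maximin_choice(outcomes))  # → brake hard (worst case severity 3, not 10)
```

Note how the rule ignores how likely each outcome is and how good the best case might be; it compares only worst cases, which is exactly why it naturally prioritizes the most vulnerable parties.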

Maximin probably isn't the best possible – and certainly isn't the only – moral decision theory to use. In some cases, the worst outcome could be avoided if a car never pulls out of its driveway! But maximin provides food for thought about how to integrate self-driving cars into daily life. Even if autonomous cars are always evaluated as safer than humans, what counts as "safer" matters very much.

3. How much better should self-driving cars be than humans before the public accepts them?

Even if people could agree on the ways in which self-driving cars should be safer than humans, it's not clear that people should be okay with self-driving cars when they first become only barely better than humans. If anything, that's when tests on city streets should begin.

Consider a new drug developed by a pharmaceutical company. The company can't market it as soon as it's proven not to kill people who take it. Rather, the drug has to go through a series of tests proving it is effective at treating the symptom or condition it's intended to. Increasingly, drug tests seek to prove a medication is significantly better than what's already on the market. People should expect the same with self-driving cars before companies put the public at risk.

The crash in Arizona wasn't just a tragedy. The failure to see a pedestrian in low light was an avoidable basic error for a self-driving car. Autonomous vehicles should be able to do much more than that before they're allowed to be driven, even in tests, on the open road. Just like pharmaceutical companies, massive technology companies should be required to thoroughly – and ethically – test their systems before their self-driving cars serve or endanger the public.


Mar 27, 2018
1) Proving that self-driving cars are merely -as good as- the average driver still makes them worse than most drivers, because the probability of accidents isn't evenly distributed among drivers: the worst drivers are responsible for the most accidents.

It's not impossible that 70-80% of the drivers on the road drive "better than average".

2) The question of ethics and moral choice is moot when the cars are still so primitive that they do not even "understand" they're in a situation of moral choice. The car's AI failed to perceive the pedestrian as an obstacle despite having cameras, radars and lidars on-board. Had it detected her as an obstacle, even a simple "don't crash" algorithm would have avoided the death.

The Volvo in question comes with a collision warning radar and an emergency brake system anyway, but it was probably disabled to not interfere with the robot driver.
Mar 27, 2018
The problem is that object recognition for the current AIs is still so simple and unreliable, because the AI is not smart. They have to err on the side of false negatives rather than risk false positives and causing accidents by randomly braking and swerving to avoid imaginary impacts.

In fact they have to lean -heavily- on the false negative side to avoid even the low probability of the car suddenly going apeshit for no reason, because even small odds tend to happen eventually.

The AI has to operate on some really dumb rules to make sure it's not doing something spurious. For example, if there's an obstacle on radar but not on camera, it could be a bug in the radar and should be ignored.

Okay, but when the camera has poor light calibration and can't see well enough, and the visual algorithm is erring on the side of false negatives and ignores the flash of a reflector in the darkness, the car will happily ignore the radar saying there's a person standing on the road.
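The failure mode described in the comment above can be sketched as a toy fusion rule (the sensor names and the two-sensor agreement requirement are invented for illustration; real perception stacks are far more complex):

```python
# Toy sensor-fusion rule biased toward false negatives, as described above:
# act on a detection only if BOTH sensors agree. A radar-only hit is treated
# as possible sensor noise and ignored.
def should_brake(radar_detects: bool, camera_detects: bool) -> bool:
    return radar_detects and camera_detects

# Radar sees the pedestrian, but the poorly lit camera does not:
print(should_brake(radar_detects=True, camera_detects=False))  # → False: no braking
```

Under this kind of AND-rule, any single degraded sensor (here, a camera with poor light calibration) silently vetoes the other's correct detection.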
Mar 27, 2018
As a motorcyclist, the fact that the AI could not see a bicyclist is somewhat alarming. I for one would appreciate some evidence or technical explanation of how reliable these systems are around motorcycles. So far, it appears to me that the subject has been carefully avoided by all involved. An AI-driven vehicle that reacts unpredictably is a real danger to anyone on two wheels. If I encounter a self-driving car, I will definitely give it a wide berth, assuming the worst. After all, even a small "false positive" maneuver could be fatal for a nearby biker. I recommend that all self-driving cars be well-marked to alert the vulnerable.
