UMass Lowell professor steers ethical debate on self-driving cars

Credit: Steffen Thoma/Public Domain

Should your self-driving car protect you at all costs? Or should it steer you into a ditch - potentially causing serious injury - to avoid hitting a school bus full of children?

Those are the kinds of questions that preoccupy Nicholas Evans, a UMass Lowell assistant professor of philosophy who teaches engineering ethics and studies the ethical dilemmas posed by emerging technologies, including drones and self-driving vehicles.

"You could program a car to minimize the number of deaths or life-years lost in any situation, but then something counterintuitive happens: When there's a choice between a two-person car and you alone in your self-driving car, the result would be to run you off the road," Evans said. "People are much less likely to buy self-driving vehicles if they think theirs might kill them on purpose and be programmed to do so."

Now Evans has won a three-year, $556,650 National Science Foundation grant to construct ethical answers to questions about autonomous vehicles, translate them into decision-making algorithms for the vehicles and then test the public health effects of those algorithms under different risk scenarios using computer modeling.

He will be working with two fellow UMass Lowell faculty members, Heidi Furey, a lecturer in the Philosophy Department, and Yuanchang Xie, an assistant professor of civil engineering who specializes in transportation engineering. The research team also includes Ryan Jenkins, an assistant professor of philosophy at California Polytechnic State University, and experts in modeling at Gryphon Scientific.

Although the technology of autonomous vehicles is new, the ethical questions they pose are age-old, such as how to strike the balance between the rights of the individual and the welfare of society as a whole. That's where the philosophers come into the equation.

"The first question is, 'How do we value, and how should we value, lives?' This is a really old problem in engineering ethics," Evans said.

He cited the cost-benefit analysis that Ford Motor Co. performed back in the 1970s, after engineers designing the new Pinto realized that its rear-mounted gas tank increased the risk of fires in rear-end crashes. Ford executives concluded that redesigning or shielding the gas tanks would cost more than payouts in lawsuits, so the company did not change the gas tank design.

Most people place a much higher value on their own lives and those of their loved ones than car manufacturers or juries do, Evans said. At least one economist has proposed a "pay-to-play" model for decision-making by autonomous vehicles, with people who buy more expensive cars getting more self-protection than those who buy bare-bones self-driving cars.

While that offends basic principles of fairness because most people won't be able to afford the cars with better protection, Evans said, "it speaks to some basic belief we have that people in their own cars have a right to be saved, and maybe even saved first."

Understanding how computers "think" - by sorting through thousands of possible scenarios according to programmed rules and then rapidly discarding 99.99 percent of them to arrive at a solution - can help create better algorithms that maintain fairness while also providing a high degree of self-protection, Evans said. For example, the car approaching the school bus could be programmed to first discard all options that would harm its own passenger, then sort through the remaining options to find the one that causes least harm to the school bus and its occupants, he said.

Although it's not quite that simple - most people would agree that a minor injury to the car's occupant is worth it to prevent serious injuries to 20 or 30 schoolchildren - it's a good starting point for looking at how much risk is acceptable and under what circumstances, according to Evans.
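The two-stage filtering Evans describes - discard options that harm the passenger, then minimize harm to others - can be sketched in a few lines. This is purely illustrative: the scenario names, harm scores, and the passenger-harm tolerance (which captures the "minor injury is acceptable" refinement) are hypothetical, not taken from any real vehicle software.

```python
# Hypothetical sketch of the two-stage option filtering described above.
# All harm scores and thresholds are made-up illustrative numbers.

def choose_action(options, passenger_harm_limit=0.1):
    """Stage 1: discard options whose harm to the car's own passenger
    exceeds a tolerance (0 = strict self-protection; a small positive
    value allows the 'minor injury' trade-off mentioned in the article).
    Stage 2: of the survivors, pick the option with least harm to others."""
    survivors = [o for o in options if o["passenger_harm"] <= passenger_harm_limit]
    if not survivors:  # no self-protecting option exists: consider everything
        survivors = options
    return min(survivors, key=lambda o: o["others_harm"])

options = [
    {"name": "brake hard",      "passenger_harm": 0.05, "others_harm": 0.9},
    {"name": "swerve to ditch", "passenger_harm": 0.3,  "others_harm": 0.0},
    {"name": "hold course",     "passenger_harm": 0.0,  "others_harm": 1.0},
]

print(choose_action(options)["name"])                            # strict: brake hard
print(choose_action(options, passenger_harm_limit=0.5)["name"])  # tolerant: swerve to ditch
```

Raising the tolerance changes the outcome: with strict self-protection the car brakes hard even though others are badly harmed, while a small accepted risk to the passenger lets it swerve away entirely - which is exactly the trade-off under debate.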

Evans and his team also will look at other issues, including the role of insurance companies in designing algorithms and the question of how many autonomous vehicles have to be on the road before they reduce the overall number of accidents and improve safety.

The NSF also asked Evans and his team to look at potential cybersecurity issues with autonomous vehicles. Today's cars could be vulnerable to hacking through unsecured Bluetooth and Wi-Fi ports installed for diagnostic purposes, but large-scale hacking of self-driving cars is potentially much more dangerous.

There are also important privacy questions involving the data that an autonomous vehicle's computer collects and stores, including GPS data and visual images from the car's cameras, Evans said.


Provided by University of Massachusetts Lowell
Citation: UMass Lowell professor steers ethical debate on self-driving cars (2017, October 5) retrieved 19 October 2019


User comments

Oct 05, 2017
"The first question is, 'How do we value, and how should we value, lives?'

The more fundamental question is, who's "we", and why should they have the say?

The ethical dilemma behind that question is about whether society has moral authority over the individual, and whether it is right to take this self-determination away from the people and put it into the hands of a corporation or a government that gets to dictate the ethics - however calculated and formulated - to the rest of us.

You see, there exists a myth that there is a "society" consisting of people who all share the same interests and goals in life, and that by identifying what those are we are able to rationally formulate a "best course of action" from which it is "irrational" to deviate.

If no such shared interest exists, then the whole point is moot, because the government couldn't have the authority to dictate what individual people should do when they crash their cars, and by proxy what their cars should do.

Oct 05, 2017
Understanding how computers "think" - by sorting through thousands of possible scenarios according to programmed rules and then rapidly discarding 99.99 percent of them to arrive at a solution

There's a crucial difference with reality.

Replace "possible" with "programmed-in" and you're closer to how computers think. The paragraph misleads one to believe that the computer understands what's happening and is able to actually think ahead, when in reality it will ignore any new scenario that it hasn't been programmed with, or mistake it for a different scenario and respond wrongly.

This is the Achilles' heel of the self-driving car as they are implemented right now. Each car has to be loaded up with all the possible scenarios, for which they simply lack the memory and processing power. What's worse, what seems very obvious to us isn't to the computer, and we find ourselves adding exceptions and special cases for thousands of stupid accidents -after- they've happened.

Oct 06, 2017
The answer is pretty trivial.
The car should follow the rules and obligations of the road, with some harmless exceptions.
I.e. if someone jumps in front of a car that has the right of way, the car should stop only if no harm is caused to the occupants; vice versa, it should steer into a wall if that someone is on a zebra crossing (although the latter will probably be avoided a priori by sensors and computer calculations/predictions).

Oct 11, 2017
(although the latter will probably be avoided a priori by sensors and computer calculations/predictions).

Although relying on external sensors and help to make the task easier for the car's computers presents the unintended consequence that the failure of said sensors can send all the cars careening into walls for absolutely no reason - just because the system has hallucinated a person crossing the street.

Or because someone had made the system believe so.

Imagine for example if a hacker could make all the Google cars believe that they're actually driving three feet to the left of their actual position by carefully tweaking the 3D scanner database that they use to locate themselves on the road. Such as by moving the buildings around subtly, so that the cars misjudge their location on the actual road when they reach a specific spot. Once the hacked database is pushed downstream, a whole highway of cars will mysteriously plunge themselves off a bridge.

Oct 11, 2017
The problem is that the car has to tolerate some discrepancies between its internal model of the world and what it measures to be the external world, since billboards get erected, trees gain or lose leaves, hedges grow larger, crowds appear and disappear, etc.

So a self-driving car like Google's judges its position on a best-fit basis. It guesses where it is by what it presumes to be static objects around it - but what the static objects are is decided by human operators reviewing the data elsewhere, since the car has no understanding or judgement of its own. It's a blind man following a rope - if something gets in the way, he stops, and if the rope leads him off a cliff, he falls.
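The "best-fit" idea above can be illustrated with a toy example: score candidate positions by how well the presumed-static landmarks in the map line up with what the sensors report, and take the best-scoring one. Everything here is hypothetical - real systems match dense 3D scans, not three points - but it shows why a tampered map shifts the answer, as described below.

```python
# Toy illustration of best-fit localization: pick the lateral offset
# that makes the mapped landmarks line up best with sensor observations.
# Landmark coordinates and candidate offsets are made-up numbers.

map_landmarks = [(0.0, 5.0), (10.0, 5.0), (20.0, 5.0)]  # surveyed map positions

def fit_error(offset, observed):
    """Sum of squared distances between each mapped landmark, shifted
    laterally by a candidate offset, and the observed landmark."""
    return sum((mx + offset - ox) ** 2 + (my - oy) ** 2
               for (mx, my), (ox, oy) in zip(map_landmarks, observed))

def best_offset(observed, candidates):
    return min(candidates, key=lambda d: fit_error(d, observed))

# Sensors see every landmark 1.0 m to the right of where the map puts it:
observed = [(1.0, 5.0), (11.0, 5.0), (21.0, 5.0)]
candidates = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(best_offset(observed, candidates))  # 1.0
```

Note that the fit is only as trustworthy as the map: if the surveyed landmark positions are subtly altered, the minimum-error offset moves with them and the car confidently misplaces itself.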

These programmers are the person inside the cabinet of the Mechanical Turk: they are the I to the AI. As such, the car cannot even be programmed to make subtle ethical distinctions about preserving the lives of passengers or pedestrians - it doesn't even know what a pedestrian is!

Oct 11, 2017
"Nicholas Evans, a UMass Lowell asst prof of philostoff"

-whoa hold it right there

"who teaches engineering ethics" ...? WTF is that? Let's do a search... oh I see they actually have a wiki page...

"following is an example from the American Society of Civil Engineers:
1. Engineers shall hold paramount the safety, health and welfare of the public and shall strive to comply with the principles of sustainable development in the performance of their professional duties."

-followed by 6 more such 'general principles of the codes of ethics', all of which are already covered by laws and codes and such.

But 'Sustainable development... ? I am smelling rat feces...

"Now Evans has won a three-year, $556,650 National Science Foundation grant to construct ethical answers..."

-Aha! A nest. And they've gotten into the pantry. Oh dear.

Oct 11, 2017
AI is intrinsically more ethical as it removes humans from the loop. AI cars are more ethical as they will reduce accidents while systematically learning from those which do happen and improving their performance in response.

We don't need hungry philos desperate for relevancy to be concocting unanswerable conundrums and clouding and retarding the process.

Not with our time, not with our money.
