Driverless cars: Who gets protected? Study shows public deploys inconsistent ethics on safety issue

June 23, 2016, Massachusetts Institute of Technology
The finalized prototype of Google's self-driving car.

Driverless cars pose a quandary when it comes to safety. These autonomous vehicles are programmed with a set of safety rules, and it is not hard to construct a scenario in which those rules come into conflict with each other. Suppose a driverless car must either hit a pedestrian or swerve in such a way that it crashes and harms its passengers. What should it be instructed to do?
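
To make the conflict concrete, here is a minimal sketch in Python (the maneuver names and harm estimates are invented for illustration; nothing below comes from the study or from any real vehicle's software) of two safety rules scoring the same unavoidable-collision scenario:

```python
# Illustrative sketch only: hypothetical maneuvers and harm estimates,
# not code from the study or from any real vehicle.
maneuvers = {
    "stay_course": {"passenger_harm": 0, "pedestrian_harm": 10},
    "swerve":      {"passenger_harm": 1, "pedestrian_harm": 0},
}

def protect_passengers(options):
    """Rule A: minimize harm to the vehicle's own occupants."""
    return min(options, key=lambda m: options[m]["passenger_harm"])

def minimize_casualties(options):
    """Rule B: minimize total harm, no matter whose it is."""
    return min(options, key=lambda m: options[m]["passenger_harm"]
               + options[m]["pedestrian_harm"])

print(protect_passengers(maneuvers))   # -> "stay_course"
print(minimize_casualties(maneuvers))  # -> "swerve"
```

The two rules pick different maneuvers for identical inputs, so whichever one ships in the car is an explicit design decision, not a technical inevitability.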

A newly published study co-authored by an MIT professor shows that the public is conflicted over such scenarios, taking a notably inconsistent approach to the safety of autonomous vehicles, should they become a reality on the roads.

In a series of surveys taken last year, the researchers found that people generally take a utilitarian approach to ethics: They would prefer autonomous vehicles to minimize casualties in situations of extreme danger. That would mean, say, having a car with one rider swerve off the road and crash to avoid a crowd of 10 pedestrians. At the same time, the survey's respondents said, they would be much less likely to use a vehicle programmed that way.

Essentially, people want driverless cars that are as pedestrian-friendly as possible—except for the vehicles they would be riding in.

"Most people want to live in in a world where cars will minimize casualties," says Iyad Rahwan, an associate professor in the MIT Media Lab and co-author of a new paper outlining the study. "But everybody want their own car to protect them at all costs."

The result is what the researchers call a "social dilemma," in which people could end up making conditions less safe for everyone by acting in their own self-interest.
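
The game-theoretic shape of that dilemma can be shown with a toy payoff calculation (the per-trip risk numbers below are invented for illustration; the paper reports survey attitudes, not risks):

```python
# Toy payoff illustration with invented per-trip risk numbers;
# the study itself reports survey attitudes, not risks.
# Your risk depends on your own car's program and on everyone else's.
RISK = {
    ("self", "self"): 5,  # everyone rides self-protective cars
    ("self", "util"): 2,  # you self-protective among utilitarian cars
    ("util", "self"): 6,  # you utilitarian among self-protective cars
    ("util", "util"): 3,  # everyone rides utilitarian cars
}

for others in ("self", "util"):
    best = min(("self", "util"), key=lambda me: RISK[(me, others)])
    print(f"if other cars run {others!r}, your safest choice is {best!r}")

# Whatever others run, "self" is individually safer (2 < 3 and 5 < 6),
# yet all-"self" (risk 5) is worse for everyone than all-"util" (risk 3).
```

Under these illustrative numbers, a rider is always individually safer choosing the self-protective car, yet universal self-protection leaves everyone worse off than universal casualty minimization: the textbook structure of a social dilemma.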

The trolley problem for a self-driving car. Credit: Iyad Rahwan
"If everybody does that, then we would end up in a tragedy ... whereby the cars will not minimize casualties," Rahwan adds.

Or, as the researchers write in the new paper, "For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest."

The paper, "The social dilemma of autonomous vehicles," is being published in the journal Science. The authors are Jean-Francois Bonnefon of the Toulouse School of Economics; Azim Shariff, an assistant professor of psychology at the University of Oregon; and Rahwan, the AT&T Career Development Professor and an associate professor of media arts and sciences at the MIT Media Lab.

Survey says

The researchers conducted six surveys, using the online Mechanical Turk public-opinion tool, between June 2015 and November 2015.

The results consistently showed that people take a utilitarian approach to the ethics of autonomous vehicles, one emphasizing the sheer number of lives that could be saved. For instance, 76 percent of respondents believed it would be more moral for an autonomous vehicle, should such a circumstance arise, to sacrifice one passenger rather than 10 pedestrians.

But the surveys also revealed a lack of enthusiasm for buying or using a driverless car programmed to avoid pedestrians at the expense of its own passengers. One question asked respondents to rate the morality of an autonomous vehicle programmed to crash and kill its own passenger to save 10 pedestrians; the rating dropped by a third when respondents considered the possibility of riding in such a car.

Similarly, people were strongly opposed to the idea of the government regulating driverless cars to ensure they would be programmed with utilitarian principles. In the survey, respondents said they were only one-third as likely to purchase a vehicle regulated this way, as opposed to an unregulated vehicle, which could presumably be programmed in any fashion.

"This is a challenge that should be on the mind of carmakers and regulators alike," the scholars write. Moreover, if autonomous vehicles actually turned out to be safer than regular cars, unease over the dilemmas of regulation "may paradoxically increase casualties by postponing the adoption of a safer technology."

Empirically informed

The aggregate performance of autonomous vehicles on a mass scale is, of course, yet to be determined. For now, ethicists say the survey offers interesting and novel data in an area of emerging moral interest.

The researchers, for their part, acknowledge that public-opinion polling on this issue is at a very early stage, which means any current findings "are not guaranteed to persist," as they write in the paper, if the landscape of driverless cars evolves.

Still, concludes Rahwan, "I think it was important to not just have a theoretical discussion of this, but to actually have an empirically informed discussion."


More information: "The social dilemma of autonomous vehicles," Science, DOI: 10.1126/science.aaf2654

The Moral Machine site: moralmachine.mit.edu/


25 comments


BrettC
5 / 5 (2) Jun 23, 2016
In reference to regulation. Imagine being given a preservation program choice on entry to the vehicle. You choose self preservation. The vehicle proceeds to hit and kill a pedestrian, perhaps a child. That death would be a direct result of a choice you made. I would definitely prefer NOT to be responsible for the dynamics of the preservation system. Besides, people get into all kinds of vehicles every day that they do not directly control. Most of the time, they do not know the quality of the driver's skill or whether the driver's moral choices mirror their own.
krundoloss
5 / 5 (3) Jun 23, 2016
This is one of those situations where you say "That's not right", yet if you really think about it, it is a crazy situation. When are you ever moving at such speeds that a crash would be deadly, mixed with pedestrians? Also, a car can only do so many things, such as stop, go, or turn left or right. Instead of this happening in real life when you say "there's nothing you could have done", this is something we are deciding ahead of time. So when those conditions arise, and someone dies, then you know it was because you already decided it ahead of time, and that makes people uneasy. We want to blame situations or people for a death, not programming.
freeiam
5 / 5 (1) Jun 23, 2016
If the 10 people are from IS armed with guns to kill hundreds of people, or a group of 99-year-olds, or 10 extremely annoying and criminal kids from the block, I would be happy to run them over and not kill myself. So, this is not a game of numbers and not readily programmable at all.
The solution is to let the car drive as carefully and predictably as possible and focus on avoiding collisions, but when a collision is unavoidable the car should always protect the driver first.
The point is, if this isn't the case people can commit murder by jumping (with 2 or more) in front of a car near a cliff, for example.
People will immediately exploit such rules; strange that this article doesn't mention that...
freeiam
not rated yet Jun 23, 2016
And please, the 'tragedy of the commons' is a completely failed notion.
krundoloss
5 / 5 (1) Jun 23, 2016
Yeah, basically holding someone accountable for what happens ruins the entire concept of driverless cars. The need to sue someone over an accident makes it impossible: is the driver to be held accountable, or the programmers, or the carmaker? On and on it goes. No one wants to be accountable for a machine roaming around at high speeds.
axemaster
5 / 5 (4) Jun 23, 2016
The objectively correct answer here is that the car should act in the self-interest of its passengers.

The rate of cars causing accidents will be negligible compared to the rate of people causing accidents. Therefore, the most important thing is to maximize the rate of uptake of the technology.

Additionally, we want the cars to follow simple rules. Otherwise, they will be unpredictable to humans, which is a huge no-no. It will be extremely stressful if we are surrounded by vehicles whose actions we cannot unconsciously predict.
jfandl
5 / 5 (1) Jun 23, 2016
The car must protect the driver and its passengers at all costs. The assumption is that the road itself will be free of hazards. If a person, child, animal, or group of people decides to enter the road, they need to understand that they must ALSO follow the rules. If they choose to enter the road where they should not, or when a crosswalk shows "do not walk" and they choose to disobey, there will be a steep cost for them to pay, but it's on them. There are going to be some pedestrian deaths, but if you watch what people do today (in America), they enter the road without even looking many times. Think of the car like the train. You wouldn't dare enter the train crossing and risk your life with a train coming. The same has to hold true for cars. Stupidity does not give people the right of way. The laws of nature get rid of the weak and ignorant. Note: In many OTHER countries you don't dare enter the street until it's clear.
dogbert
not rated yet Jun 23, 2016
A car cannot make a moral decision. The idea that the car can be a moral creature is absurd. If you insist on autonomous vehicles, such vehicles carrying passenger(s) should be programmed to preserve themselves (and thus their passengers). As a side effect of protecting its passengers, the vehicle will automatically avoid hitting anything if possible.

If you do otherwise, consider that you have designed a machine which has been programmed to murder a human being. How many robots will be allowed to exist when it becomes known that robots are being programmed to murder?
antialias_physorg
3.7 / 5 (3) Jun 24, 2016
In all these hypothetical scenarios we are seemingly requesting that autonomous cars have omniscient capabilities. I find this somewhat bizarre.
The cars have sensors and they have the ability to log the sensor input. So it's possible to reconstruct the scenario after the accident. In this case I think it is OK if the scenario shows that the car reacted reasonably* and did not make any gross errors that NO human under the same split-second pressure situation would have made.

*where 'reasonably' would be to prevent damage to any living being for as long as possible. I don't hold with these contrived scenarios where ethical decisions have to be made. Those are always subjective (as the article quite plainly shows). But 'for as long as possible' is objective and no one can argue with that in court.
krundoloss
5 / 5 (2) Jun 24, 2016
After reading others comments, I agree that these scenarios go too far. All we can do is program the autonomous vehicles to try to avoid collisions, and to collide with the fewest objects that it can. Much like trains, buses, even garage doors, if you get in the way of a moving machine, there is only so much that can be done in the design of that machine to prevent injury.
BrettC
5 / 5 (1) Jun 24, 2016
A car cannot make a moral decision. The idea that the car can be a moral creature is absurd.


It's not the car's computer making the moral decision; it's the mirrored morality of the human programmers that is under scrutiny.
antialias_physorg
3 / 5 (4) Jun 24, 2016
You're asking programmers to make moral decisions for all possible FUTURE cases (known AND unknown). That's absurd.
Programmers need the car to obey the rules of the road. Period. That's all you are asked to do as a driver, too. No one is asking you to take a preemptive morality test before you get your driver's license. (Unless I missed that question about "what would you do if 5 people are beside the road and you can either crash into them or kill yourself crashing into a wall" on my driver's test.)
BrettC
not rated yet Jun 24, 2016
We have built-in morality, something a computer does not, and we have developed it by the time we reach driving age (hopefully). We take care in driving because we understand the consequences if we do not. A computer has no such ability. That's one reason why there is a driving age limit.

How much effort do you take to avoid hitting a small rodent? Would you make the same amount of effort to avoid hitting a pet? How about your own pet? How about a human? How about your own child?
antialias_physorg
1 / 5 (1) Jun 24, 2016
We have built in morality

Really? Everyone? Last I checked, any mass murderer can get a driver's license - no questions asked (besides those required to obey the rules of the road).
There are plenty of kids who drive too fast and end up dead with all the other kids in the car along with them - don't tell me they understood the consequences of their actions.

The consequences of actions are already put into rules. Those are called the rules of the road. That's why e.g. the speed limits are the way they are.

How about your own pet? How about a human? How about your own child?

What has that got to do with what programmers should be responsible for?
kochevnik
5 / 5 (1) Jun 24, 2016
Intelligence agencies are best placed to decide who lives and dies, as autonomous vehicles allow assassinations to become automated with a user-controlled plausible-deniability interface
Eikka
5 / 5 (1) Jun 24, 2016
The rate of cars causing accidents will be negligible compared to the rate of people causing accidents.


That's placing too much unwarranted faith on the technology.

It will be a long, long time before cars and AI in general are even able to make the kind of decisions required here, or drive as competently as people, because the sensor data processing systems aren't sophisticated enough to tell a child from a flying plastic bag, and the computer systems making the final decisions are literally dumber than a flatworm because of cost.

Self-driving cars won't be better than people by default because they have to catch up to humans first, and the moment they're arguably better than the average person the corporations will stop improving them on the grounds that they "don't need to be" better.
adam_russell_9615
not rated yet Jun 25, 2016
I think the poll itself was flawed. When they asked the hypothetical question of whether the car should be programmed to save the most lives, the people obviously weren't given enough time to give it sufficient consideration. This is a complex question distilled into a quick poll question. That's bogus. When given the alternate side of the coin to consider, people came up with a different answer, not because they are hypocrites but because they hadn't considered that in the short time they thought about the first question.
dogbert
not rated yet Jun 25, 2016
adam_russell_9615,

The hypothetical is just another rendition of the Trolley problem ( https://en.wikipe..._problem ). That hypothetical is designed to confuse people into thinking it is OK to murder someone. It is not OK. It is certainly not OK to program a robot to kill someone. But the appropriate action is more difficult to see when couched in a hypothetical designed to confuse.
Osiris1
not rated yet Jun 26, 2016
Grumpy person in car they bought proceeds into intersection; car full of drunks runs stoplight and threatens to hit grumpy person's autocar. Autocar calculates ... greatest good for greatest number, and swerves and hits hard object, crippling grumpy person. Grumpy person sues autocar manufacturer. Suit is in Mississippi so no limit to claim, therefore claim is for $100,000,000,000.00. Grump is from Jackson, Ms. and car company is based in New Jersey, so hick judge finds for Grumpy. End of autocar problem on American roads.
dogbert
1 / 5 (1) Jun 26, 2016
I have not been able to discern the reason automobile manufacturers are so intent on deploying semi-autonomous cars with the goal of eventually deploying fully autonomous cars. Everyone who drives assumes a liability for accidents. Most of us keep insurance to help cover that liability. A car manufacturer who installs a cruise control which accelerates and decelerates and perhaps changes lanes automatically or a braking system which brakes without user intervention assumes the liability for the accidents caused by these semi-autonomous systems. The liability is 100% for vehicles which are being controlled 100% autonomously.

Why do car manufacturers want to accept such massive liability?
Tenstats
not rated yet Jun 30, 2016
Also, there's the possibility for pedestrians to fake being hit (or actually see to it that they get hit) and file a false claim (fraud). There might be other ways for criminals to take advantage of driverless cars as well.
TheGhostofOtto1923
1 / 5 (1) Jun 30, 2016
That's placing too much unwarranted faith on the technology.

It will be a long long time before cars and AI in general is even able to make the kind of decisions required here, or drive as competently as people
Eikka is often good at keeping up with tech developments but in this case it's a mystery why his opinions are at least 5 years old.

Self-driving cars are already better than humans and they can certainly discern between bags and kids.

I've posted the links before but he doesn't want to learn so why bother?

As to morality these cars are much safer than human drivers and that makes their use morally superior.

We can discuss it ad infinitum but it's the insurance companies that will force the conversion, beginning with habitual offenders.

Insurance rates already go up automatically at 65. The option to go auto-auto will be the only affordable choice for more and more people.
TheGhostofOtto1923
3 / 5 (2) Jun 30, 2016
Why do car manufacturers want to accept such massive liability?
Insurance companies will save money increasing rates for companies while decreasing payouts for human-caused accidents. But AI cars can be constantly improved from lessons learned while humans can't. Accidents will become rarer and rarer as the tech matures. Frequency per capita as well as severity will plummet.

AI will also have the ability to automatically record and report human lawbreakers, aggressive and erratic drivers, faulty traffic lights, potholes, missing signs, broken taillights on other vehicles, etcetcetc... thereby further reducing accidents and liability issues. And this will all get better over time.

Insurance companies will force the transition.
kochevnik
5 / 5 (1) Jun 30, 2016
Insurance companies will transfer the liability onto taxpayers and demand bailouts, which is the USA way
dogbert
not rated yet Jul 01, 2016
TheGhostofOtto1923,
Self-driving cars are already better than humans and they can certainly discern between bags and kids.


A Tesla under automatic control in May failed to discern that a transfer truck was not the sky and killed its operator.
