What drives decisions by autonomous vehicles in dire situations?

December 21, 2016 by Pat Harriman
A UCI professor and colleagues have created an online survey platform called the Moral Machine to learn what people think a self-driving car should prioritize in the event of a fatal accident – and it’s not necessarily the lives of its occupants.

Despite the promise of dramatic reductions in accident-related fatalities, injuries and damages, as well as significant improvements in transportation efficiency and safety, consumers aren't as excited about autonomous vehicles as the auto industry is. Research shows that people are nervous about life-and-death driving decisions being made by algorithms rather than by humans. Who determines the ethics of those algorithms?

Bill Ford Jr., executive chairman of Ford Motor Co., said recently that these ethics must be derived from "deep and meaningful conversations" among the public, the auto industry, the government, universities and ethicists.

Azim Shariff, assistant professor of psychology & social behavior at the University of California, Irvine, and his colleagues – Iyad Rahwan, associate professor of media arts & sciences at the MIT Media Lab in Cambridge, Mass., and Jean-Francois Bonnefon, a research director at the Toulouse School of Economics in France – have created an online survey platform called the Moral Machine to help promote that discussion.

Launched in May, it has already drawn more than 2.5 million participants from over 160 countries.

Although autonomous vehicles will reduce the frequency of accidents, accidents will still happen. People taking the Moral Machine survey have the opportunity to share their opinions about which algorithmic decisions are most ethical for a vehicle to make.

Thirteen scenarios are presented in which there will be at least one, if not multiple, fatalities. Victims can be passengers, pedestrians or even pets. Humans are characterized by sex, age, fitness level and social status, such as physician or criminal. Participants are asked what they think the car should do in each case.

"We want to gauge where people's moral priorities are in situations where autonomous vehicles have to weigh the risks of harming different individuals or animals," Shariff explains. "Do people prefer entirely utilitarian algorithms – where the car prioritizes the greatest good for the greatest number of people? Or do we think it's more ethical for a car to prioritize the lives of its passengers? Should young lives be valued over older lives?"
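The contrast Shariff describes can be made concrete with a small sketch. The code below is purely illustrative and is not how the Moral Machine or any vehicle actually works; the option fields, names and tie-breaking rule are assumptions for the example.

```python
# Illustrative only: two of the candidate policies described above, applied
# to a toy dilemma. Field names ("fatalities", "passenger_fatalities") are
# assumptions for this sketch, not a real API.

def utilitarian_choice(options):
    """Pick the option minimizing total expected fatalities (greatest good
    for the greatest number)."""
    return min(options, key=lambda o: o["fatalities"])

def passenger_first_choice(options):
    """Pick the option minimizing passenger fatalities, breaking ties by
    total fatalities."""
    return min(options, key=lambda o: (o["passenger_fatalities"], o["fatalities"]))

# A toy dilemma: swerve (killing the lone passenger) or stay the course
# (killing three pedestrians).
options = [
    {"name": "swerve", "fatalities": 1, "passenger_fatalities": 1},
    {"name": "stay", "fatalities": 3, "passenger_fatalities": 0},
]

print(utilitarian_choice(options)["name"])      # "swerve"
print(passenger_first_choice(options)["name"])  # "stay"
```

The point of the sketch is only that the two policies disagree on the same inputs, which is exactly the disagreement the survey asks participants to resolve.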

Results are tabulated to show each survey taker where he or she falls along a sliding scale of "does not matter" to "matters a lot" for a variety of preferences – including "saving more lives," "protecting passengers" and "avoiding intervention" – as well as for gender, age, species, fitness and social value. The site also provides the average of all previous responses, so people can see how their ethical intuitions compare to the majority.
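As a hypothetical illustration of that tabulation (the article does not describe the Moral Machine's actual scoring), a score for one preference could be the fraction of a participant's dilemmas, among those where the attribute distinguishes the two outcomes, in which they chose the outcome the attribute favors; averaging across participants would give the comparison baseline. Everything below, including the data layout, is an assumption for the sketch.

```python
# Hypothetical sketch only: the Moral Machine's real scoring is not published
# in this article. We assume each answered dilemma records which outcome the
# participant chose and which outcome each attribute would favor.

def preference_score(choices, attribute):
    """Fraction of relevant dilemmas in which the participant picked the
    outcome favored by `attribute` (e.g. "more_lives"). Returns None if the
    attribute never distinguished the two outcomes."""
    relevant = [c for c in choices if attribute in c["favored_by"]]
    if not relevant:
        return None
    hits = sum(1 for c in relevant if c["chosen"] == c["favored_by"][attribute])
    return hits / len(relevant)

# One participant's answers to three toy dilemmas ("A"/"B" name the outcomes).
choices = [
    {"chosen": "A", "favored_by": {"more_lives": "A", "protect_passengers": "B"}},
    {"chosen": "B", "favored_by": {"more_lives": "A"}},
    {"chosen": "A", "favored_by": {"protect_passengers": "A"}},
]

print(preference_score(choices, "more_lives"))          # 0.5
print(preference_score(choices, "protect_passengers"))  # 0.5
```

A score near 1 would land toward "matters a lot" on the site's slider for that preference, a score near 0.5 toward "does not matter".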

"At the end, participants have the option to help us better understand their judgments by choosing to answer additional questions about their personal trust of machines and willingness to buy a self-driving car," Shariff says. "The research we've already published on the ethics of autonomous vehicles has been met with a lot of interest. The international reach of the Moral Machine allows us to vastly increase the breadth of this research and study a much wider array of societies that will be affected in a future dominated by self-driving cars."


More information: J.-F. Bonnefon et al. The social dilemma of autonomous vehicles, Science (2016). DOI: 10.1126/science.aaf2654

18 comments


antialias_physorg
4.8 / 5 (4) Dec 21, 2016
Thirteen scenarios are presented in which there will be at least one, if not multiple, fatalities. Victims can be passengers, pedestrians or even pets.

Is it just me or does anyone else find these scenarios completely irrelevant? Even with the many accidents we have in a non-autonomous world, I seriously doubt that conscious decisions by drivers to "hit A" vs "hit B" in a sure-fire accident occur in any but the most negligible number of cases.

And I have yet to hear about any court of law judging someone guilty because they should have hit the single person instead of the group.

When we start to demand superhuman decision abilities of algorithms that demonstrably lower fatalities - and thereby judge them 'unfit' - then something is amiss.
entrance
3 / 5 (2) Dec 21, 2016
So everyone is assigned a value? And this value is regularly updated, depending on social and economic aspects? Politicians receive the highest rating, followed by police, military and fire brigade? Every job is rated individually? And your value is 0 if you are unemployed?

In the first moment I would say this is totally absurd in every respect. On the other hand, it is also tempting. I know some people I would really like to give a 0. Humanity wouldn't miss them if they were killed in an accident. Especially I wouldn't miss them. I wonder if you could assign a negative value to certain people, so that self-driving cars would try to catch them. LOL

It's a crazy topic.
dogbert
not rated yet Dec 21, 2016
The terms moral and ethical should not be applied to machines or the algorithms operating those machines.

People are not supporting efforts to create autonomous vehicles for many reasons, and such idiocy as this is among those reasons.
adam_russell_9615
5 / 5 (1) Dec 21, 2016
The AI needs to keep the safety of the driver and passengers as its highest priority. No one is buying a car that may some day decide that they need to die to save a jaywalking pedestrian.
Zzzzzzzz
not rated yet Dec 21, 2016
My intuitive thought on this subject is that people don't immediately trust the automated response because they want to have the ability to identify with the object of their trust. Most of us can identify with the kindly bus driver who we can imagine as one of our relatives or friends, and therefore have some level of trust in their judgement. If I make a mistake and kill myself or my family, no one gets sued. If an automated vehicle is involved, someone will get sued no matter the circumstances.
travisr
not rated yet Dec 21, 2016
Humans are the best at these moral decisions because they don't make them in the time frame available. It's all instinct, no hurt feelings. But when you have to predesignate killing a bus full of children instead of a driver, or sacrificing the driver instead of the bus, we all get a little squeamish. Worse yet, what happens when the AI probabilities are off and the move kills both? Well, I guess we should all simply rejoice that someone made a conscious best guess, rather than a reptilian response like jack the brakes and scream.
thisisminesothere
not rated yet Dec 21, 2016

Is it just me or does anyone else find these scenarios completely irrelevant? Even with the many accidents we have in a non-autonomous world, I seriously doubt that conscious decisions by drivers to "hit A" vs "hit B" in a sure-fire accident occur in any but the most negligible number of cases.

And I have yet to hear about any court of law judging someone guilty because they should have hit the single person instead of the group.

When we start to demand superhuman decision abilities of algorithms that demonstrably lower fatalities - and thereby judge them 'unfit' - then something is amiss.


I think the point is that these cars will have reaction times that are dramatically faster than a human's, thus the ABILITY to make a decision. So had you the ability to make a decision the moment before it happened, what decision would you make? It's not that humans can; just that if we could, what is the right decision? If there even is one.
dogbert
5 / 5 (1) Dec 21, 2016
thisisminesothere,
The point should be that human beings can make moral or ethical decisions in a crisis; machines cannot.

A human driver may decide to kill him/herself to prevent harm to another human or even an animal. That would be a moral decision. A programmer at Google or Ford Motor Company deciding to kill a driver is not moral and is in fact immoral.

An autonomous driving car should be designed to drive safely and to try to prevent harm to the people in the vehicle.

Trying to make a moral car is simply insane.
thisisminesothere
not rated yet Dec 21, 2016
So what's your answer to this question then? How should these cars be programmed? STRICTLY to save the lives of the passengers and nothing else matters?

I'm not disagreeing with you, if that's the case; just curious where you stand on this.
KorvusKorax
not rated yet Dec 21, 2016
Over the past few years I have casually followed along with these articles on the inevitable grind toward realization they represent, but I'm not sure I understand the current discussion (determining a better outcome given a terrible situation).

Surveys like these may serve their purpose (socio-economic profiling), but given the heaps of new technology developed and applied to this issue, I can only expect the technology to keep advancing to the point of entirely avoiding the need for the kind of decision making these surveys present.

If the technology provided has the opportunity and capability to make these kinds of calls, could they not go further in preventing such scenarios from ever happening?
KorvusKorax
not rated yet Dec 21, 2016
I can only imagine (perhaps naively) some provision that would spread warning of an incident through approaching traffic much more rapidly than even drivers themselves (given conditions) could notice from afar, having both human-driven and autonomous vehicles reduce speed or even reroute entirely while giving the driver and passengers an explanation.

Also, rather than determine "who to hit" I would prefer it calculate "how to hit" them: making best use of crumple zone engineering, adjusting angle of impact, not colliding into passenger bays directly, or even determining if there are passengers on the side of the vehicle to be impacted. Of course, all of that and more is on top of determining road conditions, traffic density in both directions, whether to lock out driver action, and so on.

"Success" will be determined by the programming or theory being borne out in reducing any and all incidents to near-misses or at worst minor accidents (zero loss of life).
dogbert
not rated yet Dec 21, 2016
thisisminesothere,

So what's your answer to this question then?


The question itself is wrong. You can't make a moral car. Programmers in California or Redmond or at the Ford or Chevrolet dealerships cannot make a moral decision in a piece of software about hypothetical situations. If your car is programmed to kill you, the people who programmed it have committed an immoral act. They have decided that you need to be killed and that is premeditated murder.

They should program the car to drive as safely as is practical and to protect the people riding in it. This is what you do when you drive. If you decide to kill yourself, that is your business. It is not the business of a car manufacturer to kill you.

There are many problems with autonomous cars. The illogic of thinking about who to kill and actually setting up a web site to examine how to do it is a measure of how wrong the people pushing this technology are.
thisisminesothere
not rated yet Dec 21, 2016
So your solution is to do nothing at all? Autonomous cars are coming. There's no getting around that.

These people are looking for solutions to problems that may crop up at some random point in the future. I see it as planning ahead as best as possible. To assume that they are planning out murders is asinine.

I don't really know what the correct way to go about this is, but I don't think yours is the right one.
dogbert
not rated yet Dec 21, 2016
Not do nothing at all. Program the car to drive itself.

Consider this, if manufacturers of autonomous cars decide to program those cars to kill their owners, they won't be selling many cars.

On reconsideration, you are right. Program them to kill their owners. That will put a stop to the madness.
adam_russell_9615
not rated yet Dec 21, 2016
So what's your answer to this question then? How should these cars be programmed? STRICTLY to save the lives of the passengers and nothing else matters?

I'm not disagreeing with you, if that's the case; just curious where you stand on this.


1. Safety of driver and passengers comes first.

The only possible moral question I can see would be: if you had no other choice, would you swerve into a parked car (becoming liable for all damages) or hit the jaywalker? Naturally if you could avoid both then you avoid both, but what if you couldn't?
antialias_physorg
5 / 5 (1) Dec 22, 2016
STRICTLY to save the lives of the passengers and nothing else matters?

I would say that that is the only sensible way to go about it. Autonomous cars will be programmed (more correctly: will learn) to follow the rules of the road with an adequate safety buffer to allow for safe braking in any reasonably foreseeable circumstance. The main purpose of such autonomous vehicles is to get their charge safely from A to B. I see no reason why that 'safely' directive should be changed in the case of an impending accident.

Of course we could get around the entire discussion by simply giving the passengers an option to set preferences what to do. But in the end I find this rather academic because the amount of computing power necessary to *forecast* the effect of a car hitting someone (will it kill? Will it just injure? Will someone be flung to the side with a bruise?) is so ludicrously high that it can't be done in such a short timespan to a point that would inform a decision reliably.
TheGhostofOtto1923
5 / 5 (1) Dec 22, 2016
So everyone is assigned a value? And this value is regularly updated, depending on social and economic aspects? Politicians receive the highest rating, followed by police, military and fire brigade? Every job is rated individually? And your value is 0 if you are unemployed?
Insurance companies, credit agencies, the justice system do this all the time. Review boards decide who deserves to get lifesaving therapy and transplants all the time. Govts decide whether to fight despots and abusers ALL THE TIME.

Your quaint mythology of blackbox justice has been delivered to you by religion. We can program justice into machines based on our own best qualities and then expect them to carry it out flawlessly.

Further we can improve their performance based on lessons learned. With humans you have to educate each new gen, and end up having to use machines to make sure they comply anyway.

Machines are better at everything because we design them to be. Design beats evolution.
TheGhostofOtto1923
5 / 5 (1) Dec 22, 2016
An AI car runs a red light in CA. The situation is studied and the AI is improved to reduce the chance of it happening again. With humans this can NEVER happen. The best you can do is design machines to prevent them from doing it.

AI goes one better and removes the human entirely. And we have the prospect that AI can improve its own performance based on such incidents and share it with other cars v-to-v in real time. A particular road surface proves to be slipperier than anticipated. That car shares this with others in the vicinity and it becomes a permanent part of their collective experience. Tire and brake design could even be tweaked.

Humans would tend to say 'oh yeah I forgot'.
