The ethical dilemmas of the driverless car

October 28, 2015 by Bill Buchanan, The Conversation

We make decisions every day based on risk – perhaps running across a road to catch a bus if the road is quiet, but not if it's busy. Sometimes these decisions must be made in an instant, in the face of dire circumstances: a child runs out in front of your car, but there are other dangers to either side, say a cat and a cliff. How do you decide? Do you risk your own safety to protect that of others?

Now that driverless cars are here, with no quick or sure way for a human to override the controls – or perhaps no way at all – manufacturers face an algorithmic ethical dilemma. On-board computers in cars are already parking for us and driving on cruise control, and they could take control in safety-critical situations. That means they will be faced with the difficult choices that sometimes confront human drivers.

How should a computer's ethical calculus be programmed? Some options:

  • Calculate the lowest number of injuries for each possible outcome, and take that route. Every living being would be valued equally.
  • Calculate the lowest number of injuries to children for each possible outcome, and take that route.
  • Allocate values of 20 for each human, four for a cat, two for a dog, and one for a horse. Then calculate the total score for the casualties of each possible route, and take the route with the lowest score. A large enough group of dogs would outscore two cats, so the car would swerve to save the dogs.

What if the car also included its driver and passengers in this assessment, with the implication that sometimes those outside the car would score more highly than those within it? Who would willingly climb aboard a car programmed to sacrifice them if need be?
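By way of illustration, here is a minimal sketch of such a score card in code. The weights follow the illustrative values in the list above (human 20, cat 4, dog 2, horse 1); the routes, data shapes and tie-breaking behaviour are assumptions for illustration, not anything a manufacturer has published.

```python
# A minimal sketch of the "score card" calculus described above.
# Weights follow the article's illustrative values; everything else
# (route names, tie-breaking) is a hypothetical assumption.

WEIGHTS = {"human": 20, "cat": 4, "dog": 2, "horse": 1}

def impact_score(casualties):
    """Total weighted harm for one possible route.

    casualties maps each kind of being to how many of them this
    route would hit.
    """
    return sum(WEIGHTS[kind] * count for kind, count in casualties.items())

def choose_route(routes):
    """Take the route with the lowest total score (ties: first listed)."""
    return min(routes, key=lambda name: impact_score(routes[name]))

# The dilemma from the opening paragraph: a child ahead, a cat to one
# side, a cliff (i.e. the occupant's life) to the other.
routes = {
    "straight": {"human": 1},      # the child
    "swerve_left": {"cat": 1},     # the cat
    "swerve_right": {"human": 1},  # over the cliff: the occupant
}
print(choose_route(routes))  # -> swerve_left (score 4 beats 20)
```

Including the car's occupants is then just one more entry in each route's casualty list, which is exactly what makes the question of who would climb aboard so uncomfortable.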

A recent study led by Jean-Francois Bonnefon of the Toulouse School of Economics in France suggested that there's no right or wrong answer to these questions. The researchers asked several hundred workers recruited through Amazon's Mechanical Turk for their views on scenarios in which a car could swerve into a barrier, killing its driver but saving one or more pedestrians, and varied the number of pedestrians who could be saved.

Programmable ethics through a score card for driverless cars. Credit: Author provided

Bonnefon found that most people agreed with the principle of programming cars to minimise the death toll, but were less certain when it came to the exact details of the scenarios. They were keen for others to use self-driving cars, but less keen themselves. So people often feel a utilitarian instinct to save the lives of others and sacrifice the car's occupant, except when that occupant is them.

Intelligent machines

Science fiction writers have had plenty of licence to write about robots taking over the world (Terminator and many others), or about worlds where everything that's said is recorded and analysed (as in Orwell's 1984). It has taken a while, but many staples of science fiction are now becoming mainstream science and technology. The internet and cloud computing have provided the platform for great leaps of progress, increasingly pitting machine intelligence against the human.

In Stanley Kubrick's seminal film 2001: A Space Odyssey, we see hints of a future where computers make decisions about the priorities of their mission, with the ship's computer HAL saying: "This mission is too important for me to allow you to jeopardise it".

Machine intelligence is appearing in our devices, from phones to cars. Intel predicts that there will be 152m connected cars by 2020, generating over 11 petabytes of data every year – enough to fill more than 40,000 250GB hard disks. How intelligent? As Intel puts it, (almost) as smart as you. Cars will share and analyse a range of data in order to make decisions on the move. It's true enough that in most cases driverless cars are likely to be safer than humans, but it's the outliers that concern us.

The author Isaac Asimov's famous three laws of robotics proposed how future machines might cope with the need to make decisions in dangerous circumstances:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He even added a more fundamental "0th law" preceding the others:

  • A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
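Read as a program, the laws form a strict precedence hierarchy, each yielding to the ones above it. A toy sketch makes the point (every boolean predicate here is a hypothetical stand-in, since deciding whether an action "harms a human" is precisely the unsolved part, and the "except where" clauses are simplified away): when every available action, including doing nothing, harms someone, nothing is permitted.

```python
from dataclasses import dataclass

# The laws as a precedence-ordered filter over candidate actions.
# All predicates are hypothetical stand-ins; the "except where such
# orders conflict..." clauses are simplified away in this sketch.

@dataclass
class Action:
    name: str
    harms_humanity: bool = False
    harms_human: bool = False          # directly, or through inaction
    disobeys_human_order: bool = False
    endangers_robot: bool = False

def permitted(a: Action) -> bool:
    if a.harms_humanity:           # 0th law: absolute
        return False
    if a.harms_human:              # 1st law: yields only to the 0th
        return False
    if a.disobeys_human_order:     # 2nd law: yields to the 1st and 0th
        return False
    return not a.endangers_robot   # 3rd law: weakest of all

# The car-crash dilemma breaks the scheme: every candidate action,
# including doing nothing, harms some human, so nothing is permitted.
options = [Action("swerve", harms_human=True),
           Action("brake", harms_human=True),
           Action("do_nothing", harms_human=True)]
print([a.name for a in options if permitted(a)])  # -> []
```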

Asimov did not tackle our ethical dilemma of the car crash, but with better sensors to gather data, more sources of data to draw on, and greater processing power, the decision to act is reduced to a cold act of data analysis.

Of course, software is notoriously buggy. What havoc could malicious actors wreak by compromising these systems? And what happens at the point the car takes control from the human? Will it be right to do so? After all, in 2001, Dave has to take urgent action when he's had enough of HAL's decision-making.

Could a future buyer purchase programmable ethical options with which to customise their car? The artificial intelligence equivalent of a bumper sticker that says "I brake for nobody"? In which case, how would you know how other cars were likely to act – and would you climb aboard if you did?

Then there are the legal issues. What if a car could have intervened to save lives but didn't? Or if it ran people down deliberately based on its ethical calculus? This is the responsibility we bear as humans when we drive a car, but machines follow orders, so who (or what) carries the responsibility for a decision? As we see with improving face recognition in smartphones, airport monitors and even on Facebook, it's not too difficult for a computer to identify objects, quickly project the consequences based on car speed and road conditions, calculate a set of outcomes, pick one, and act. And when it does so, it's unlikely you'll have any choice in the matter.
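To make that pipeline concrete, here is a hypothetical sketch of the loop just described: detect what is around the car, project an outcome for each available manoeuvre from speed and road conditions, score the outcomes with the earlier score card, and act. Every function name and data shape is an assumption for illustration, not any vendor's real API.

```python
from collections import Counter

# Hypothetical detect -> project -> score -> act loop. All names and
# data shapes are illustrative assumptions, not a real vendor API.

WEIGHTS = {"human": 20, "cat": 4, "dog": 2, "horse": 1}  # score card above

def impact_score(casualties):
    return sum(WEIGHTS[kind] * n for kind, n in casualties.items())

def detect_objects(frame):
    # Stand-in for a perception stack (object/face recognition).
    # Here the "frame" is already a list of recognised objects.
    return frame

def predict_outcome(manoeuvre, objects, speed_mps, friction):
    # Stand-in for physics: anything in this manoeuvre's path and
    # inside the braking distance gets hit.
    braking_distance = speed_mps ** 2 / (2 * 9.81 * friction)
    return Counter(o["kind"] for o in objects
                   if o["path"] == manoeuvre and o["distance"] < braking_distance)

def control_step(frame, speed_mps, friction):
    objects = detect_objects(frame)
    manoeuvres = ("continue", "swerve_left", "swerve_right")
    outcomes = {m: predict_outcome(m, objects, speed_mps, friction)
                for m in manoeuvres}
    return min(outcomes, key=lambda m: impact_score(outcomes[m]))

frame = [{"kind": "human", "path": "continue", "distance": 12.0},
         {"kind": "cat", "path": "swerve_left", "distance": 8.0},
         {"kind": "human", "path": "swerve_right", "distance": 5.0}]
print(control_step(frame, speed_mps=20.0, friction=0.7))  # -> swerve_left
```

The whole decision happens in one function call, milliseconds before impact, with nobody consulted.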


8 comments


billpress11
not rated yet Oct 28, 2015
What about the legal dilemma? Who would ultimately be responsible, the driver or the programmers?
Returners
1 / 5 (2) Oct 28, 2015
So people often feel a utilitarian instinct to save the lives of others and sacrifice the car's occupant, except when that occupant is them.


In other words, humans are hypocrites.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.


It's unenforceable and unprogrammable. He figured that out, which was actually the whole point.

Given a choice to save one of two humans, having not enough time to do both, the Robot would enter a logic loop...it can't "through inaction allow the other human being to come to harm."

So you would then have to program the robot to evaluate one human above the other...a concept supposedly contrary to our founding document's claim that all are created equal, and this evaluation would then need to be able to override the 1st law to allow the robot to save one person while sacrificing the other. Based on what? Merit? Age; kids and ladies first? A random number generator?
Returners
1 / 5 (2) Oct 28, 2015
I like pets and stuff, but I would sacrifice any number of animals to save one human being.

I wouldn't want PETA having any part in writing the code which determines the outcome in an emergency situation.

Save the human, and replace Fido later.
Returners
1 / 5 (2) Oct 28, 2015
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
===

Unenforceable, and not even necessarily moral.

Storekeep: "Robot, protect me!"
Robot: "The First Law forbids me from harming a human. I can do nothing against our assailant!"
Robber: "Robot, take the till and put it in my car! It's for charity!"
Robot: "The Three Laws do not forbid me from doing as you say. That is within the parameters of my programming."
Storekeep: "No Robot, stealing is wrong!"
Robot: "Unable to process request. Previous command within parameters of programming."
Storekeep: "Stop."
Robber: "Go, now,a nd don't listen to the other guy."
Storekeep: "You stupid robot shut down."
Robot: "That conflicts with the previous command. Unable to process request."

Later:
WXBC News at 8:00.
"The Three Laws: Are They Enough for Robots?"
Some say "No".

"What about the Ten Commandments then?"
Humans can't even keep those.
Moebius
not rated yet Oct 28, 2015
If these things are unleashed I give them no more than a couple years before they are severely restricted.
Returners
1 / 5 (2) Oct 28, 2015
Trust owner above anyone else?
Doesn't work, owner could be evil.

Trust law enforcement above anyone else?
Doesn't work, law enforcement could be corrupt (or mistaken in judgement).

Only override a previous command if given by the same person? Doesn't work, unforeseen emergency, corruption, etc.

Evaluate women and children above men? Probably a good rule.

So do we save an elderly woman? Or a young man?

So what if you have an autistic child and a normal child and have time to save just one?

A cripple vs a normal?

A normal child vs a super-star athlete or the valedictorian?

A purely rationalist approach suggests to save the "more valuable" person, but that isn't necessarily even better in the long term. Maybe that savant cures cancer?! Maybe the Valedictorian is a selfish Wall Street bitch and the normal girl is meant to be a nurse or doctor.
Returners
1 / 5 (2) Oct 28, 2015
there's no right or wrong answer to these questions.


Really?

You can't evaluate a human vs a dog?

So, when a human driver kills another human due to impairment or accident, they often go to jail.

What happens whenever a driverless car kills a human? Do the Google CEO and stockholders go to jail? Who gets punished?

Oh, Oh, i know...when a robot injures a human, the company pays some hush money.

When a human injures a human, the human goes to jail.

So then you evaluate a robot as morally superior to a human.

By the way, this is how it's already done with industrial accidents.

When a neural net robot gets angry and kills someone, who will be punished? The creator, or the robot?
Squirrel
not rated yet Oct 29, 2015
This is an issue to be decided by the courts and legislation, not ethical philosophers and "several hundred workers found through Amazon's Mechanical Turk".
