Will your self-driving car be programmed to kill you?

June 12, 2015 by Matt Windsor, University of Alabama at Birmingham

Imagine you are in charge of the switch on a trolley track.

The express is due any minute; but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that's why you have this switch. But on the alternate track there's more trouble: Your child, who has come to work with you, has fallen down on the rails and can't get up. That switch can save your child or a busload of others, but not both. What do you do?

This ethical puzzler is commonly known as the Trolley Problem. It's a standard topic in philosophy and ethics classes, because your answer says a lot about how you view the world. But in a very 21st-century take, several writers have adapted the scenario to a modern obsession: self-driving cars. Google's self-driving cars have already driven 1.7 million miles on American roads and have never been the cause of an accident during that time, the company says. Volvo says it will have a self-driving model on Swedish highways by 2017. Elon Musk says the technology is so close that he can have current-model Teslas ready to take the wheel on "major roads" by this summer.

Who watches the watchers?

The technology may have arrived, but are we ready? Google's cars can already handle real-world hazards, such as cars suddenly swerving in front of them. But in some situations, a crash is unavoidable. (In fact, Google's cars have been in dozens of minor accidents, all of which the company blames on human drivers.) How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation—a blown tire, perhaps—where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm—even if that means choosing to slam into a retaining wall to avoid hitting an oncoming bus? Who will make that call, and how will they decide?
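How such a judgment might actually be encoded is left open, but at bottom it is a scoring problem. Here is a minimal, purely hypothetical sketch in Python; the option names, risk counts, survival probabilities, and the choose_crash_option function are all invented for illustration and are not drawn from Google's, Volvo's, or Tesla's actual software.

```python
# Hypothetical sketch only: one way a planner might rank unavoidable-crash
# options. All numbers and field names are invented for illustration.

def choose_crash_option(options):
    """Return the option with the lowest estimated total harm."""
    def expected_harm(opt):
        # Rough expected casualties: people at risk times the chance
        # they do not survive the collision.
        people = opt["occupants_at_risk"] + opt["others_at_risk"]
        return people * (1.0 - opt["survival_probability"])
    return min(options, key=expected_harm)

if __name__ == "__main__":
    options = [
        {"name": "retaining_wall", "occupants_at_risk": 1,
         "others_at_risk": 0, "survival_probability": 0.4},
        {"name": "oncoming_bus", "occupants_at_risk": 1,
         "others_at_risk": 30, "survival_probability": 0.7},
    ]
    print(choose_crash_option(options)["name"])  # prints "retaining_wall"
```

Even this toy version makes the ethical question visible: the line that lumps the car's occupants and everyone else into a single number is already a policy decision about whose harm counts, and how much.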

"Ultimately, this problem devolves into a choice between utilitarianism and deontology," said UAB alumnus Ameen Barghi. Barghi, who graduated in May and is headed to Oxford University this fall as UAB's third Rhodes Scholar, is no stranger to moral dilemmas. He was a senior leader on UAB's Bioethics Bowl team, which won the 2015 national championship. Their winning debates included such topics as the use of clinical trials for Ebola virus, and the ethics of a hypothetical drug that could make people fall in love with each other. In last year's Ethics Bowl competition, the team argued another provocative question related to autonomous vehicles: If they turn out to be far safer than regular cars, would the government be justified in banning human driving completely? (Their answer, in a nutshell: yes.)

Death in the driver's seat

So should your self-driving car be programmed to kill you in order to save others? There are two philosophical approaches to this type of question, Barghi says. "Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people," he explained. In other words, if it comes down to a choice between sending you into a concrete wall or swerving into the path of an oncoming bus, your car should be programmed to do the former.

Deontology, on the other hand, argues that "some values are simply categorically always true," Barghi continued. "For example, murder is always wrong, and we should never do it." Going back to the trolley problem, "even if shifting the trolley will save five lives, we shouldn't do it because we would be actively killing one," Barghi said. And, despite the odds, a self-driving car shouldn't be programmed to choose to sacrifice its driver to keep others out of harm's way.
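To see how the two schools diverge in practice, consider a minimal hypothetical sketch. The harm scores, the field names, and the notion of an option that "actively sacrifices" someone are placeholders invented for illustration, not a proposal for real vehicle software.

```python
# Hypothetical illustration of the two approaches described above.
# Harm scores and the "actively_sacrifices" flag are invented placeholders.

def utilitarian_choice(options):
    # Minimize total expected harm, no matter who bears it.
    return min(options, key=lambda o: o["expected_harm"])

def deontological_choice(options):
    # Refuse any option that actively sacrifices someone; among what is
    # left, prefer the least harmful. If every option actively sacrifices
    # someone, the rule gives no answer (None here).
    permitted = [o for o in options if not o["actively_sacrifices"]]
    return min(permitted, key=lambda o: o["expected_harm"]) if permitted else None

options = [
    {"name": "swerve_into_wall", "expected_harm": 0.6, "actively_sacrifices": True},
    {"name": "stay_in_lane", "expected_harm": 5.0, "actively_sacrifices": False},
]
print(utilitarian_choice(options)["name"])    # swerve_into_wall
print(deontological_choice(options)["name"])  # stay_in_lane
```

The same inputs produce opposite answers, which is exactly why the programming question cannot be settled by engineering alone.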

Every variation of the trolley problem—and there are many: What if the one person is your child? Your only child? What if the five people are murderers?—simply "asks the user to pick whether he has chosen to stick with deontology or utilitarianism," Barghi continued. If the answer is utilitarianism, then there is another decision to be made, Barghi adds: rule or act utilitarianism.

"Rule utilitarianism says that we must always pick the most utilitarian action regardless of the circumstances—so this would make the choice easy for each version of the trolley problem," Barghi said: Count up the individuals involved and go with the option that benefits the majority.

But act utilitarianism, he continued, "says that we must consider each individual act as a separate subset action." That means that there are no hard-and-fast rules; each situation is a special case. So how can a computer be programmed to handle them all?
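Framed as code, the contrast looks roughly like this. This is a hypothetical sketch: the scenario fields and the open-ended evaluate callback are invented to show why act utilitarianism is hard to write down in advance.

```python
# Hypothetical sketch of rule vs. act utilitarianism for a trolley-style case.
# Scenario fields and the `evaluate` callback are invented for illustration.

def rule_utilitarian(scenario):
    # One fixed rule applied identically to every case: save the larger group.
    if scenario["on_main_track"] > scenario["on_side_track"]:
        return "divert"
    return "do_nothing"

def act_utilitarian(scenario, evaluate):
    # No fixed rule: each case gets its own assessment of overall well-being,
    # which may hinge on any contextual detail (who the people are, what
    # happens afterward, and so on). The open-ended `evaluate` function is
    # precisely the part no one knows how to specify in advance.
    return max(("divert", "do_nothing"),
               key=lambda action: evaluate(scenario, action))

scenario = {"on_main_track": 5, "on_side_track": 1}
print(rule_utilitarian(scenario))  # "divert"
```

The rule version is easy to program and easy to audit; the act version pushes all the difficulty into an evaluation function that would have to anticipate every special case.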

"A computer cannot be programmed to handle them all," said Gregory Pence, Ph.D., chair of the UAB College of Arts and Sciences Department of Philosophy. "We know this by considering the history of ethics. Casuistry, or applied Christian ethics based on St. Thomas, tried to give an answer in advance for every problem in medicine. It failed miserably, both because many cases have unique circumstances and because medicine constantly changes."

Preparing for the worst

The members of UAB's Ethics and Bioethics teams spend a great deal of time wrestling with these types of questions, which combine philosophy and futurism. Both teams are led by Pence, a well-known medical ethicist who has trained UAB medical students for decades.

To arrive at their conclusions, the UAB team engages in passionate debate, says Barghi. "Along with Dr. Pence's input, we constantly argue positions, and everyone on the team at some point plays devil's advocate for the case," he said. "We try to hammer out as many potential positions and rebuttals to our case before the tournament as we can so as to provide the most comprehensive understanding of the topic. Sometimes, we will totally change our position a couple of days before the tournament because of a certain piece of input that was previously not considered."

That happened this year when the team was prepping a case on physician addiction and medical licensure. "Our original position was to ensure the safety of our patients as the highest priority and try to remove these physicians from the workforce as soon as possible," Barghi said. "However, after we met with Dr. Sandra Frazier"—who specializes in physicians' health issues—"we quickly learned to treat addiction as a disease and totally changed the course of our case."

Barghi, who plans to become a clinician-scientist, says that ethics competitions are helpful practice for future health care professionals. "Although physicians don't get a month of preparation before every ethical decision they have to make, activities like the ethics bowl provide miniature simulations of real-world patient care and policy decision-making," Barghi said. "Besides that, it also provides an avenue for previously shy individuals to become more articulate and confident in their arguments."

13 comments


Pattern_chaser
not rated yet Jun 12, 2015
The outcome of this discussion isn't as important as simply having this discussion. We need to know UP FRONT how such machines will be programmed. Well published!
adam_russell_9615
5 / 5 (2) Jun 12, 2015
"Open the car door, Hal"
"Im sorry Dave. I cant do that"
Osiris1
1 / 5 (2) Jun 12, 2015
I will NEVER willingly buy a robot car. Such is simply suicide and immoral by definition. Suicide is self murder. Murder is murder. A machine that decides to take a life is a murderer, active OR latent (it has not tasted blood yet)! Any robot-involved death should and will be defined as a murder and made illegal, and a capital felony to manufacture, finance, or design, much less go as far as to sell. Its corporate sponsors would also be guilty of capital conspiracy to commit murder with malice aforethought with 'special circumstances'. Such a thing could be hacked by religious fanatics to kill as many as possible...run amok anywhere at any time, like the Terminator machines of 'Skynet' in the Terminator movies.

Programs are like checkers. Like the immortal author John Steinbeck said in "The Grapes of Wrath": "What one man can do, another can also do!" As we can 'ethically program', the Islamic monsters can reprogram. Any bridge rail can be seen by an ISIS robot as a head remover.
Osiris1
1 / 5 (2) Jun 12, 2015
Some nation, somewhere, WILL make these terminator machines illegal and bind the whole world thru reciprocity treaties.

If not, then the horrific scenario below can and probably will happen, sure as a slow-motion train wreck. Such a vision will surely spur national legislators to act and overrule any so-called self-destructive science, no matter what the so-called 'ethics'.

Of course, Islamic State programmers may be able to automate the recognition, by whatever means, of the registrations of all cars driven, owned, or occupied by Jewish people or anyone else they really hate, like Protestants. I will leave to your fertile imaginations just what devilish plans these DAESH may hatch.
Anakin
not rated yet Jun 13, 2015
@adam_russell_9615
-Open the door Hoff
-I'm sorry Kung Fury. I can't let you do that.
Did anyone tell you... not to hassle the Hoff 9000?
youtube.com/watch?v=SyU0-HEJFXY
italba
not rated yet Jun 13, 2015
@Osiris1: I can't agree with you. Cars are used NOW as a lethal weapon in Israel http://www.israel...taLzOdQo and in the USA too http://ktla.com/2...k-crash/ . Should the car companies involved be charged with murder? And consider that cars can easily be transformed into land torpedoes: Just lock the steering wheel and let them go! Couldn't they be called "suicide cars" if used that way?
antigoracle
not rated yet Jun 13, 2015
Sounds like the perfect loophole for when your robot car kills you.
pubwvj
not rated yet Jun 13, 2015
"Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,"

Your genes say that is an insane statement, and that by arguing it you are insane and unfit to be included in future generations. Bye.
Eikka
5 / 5 (1) Jun 14, 2015
This ethical puzzler is commonly known as the Trolley Problem.


It's also a loaded question, because the questioner is limiting the person to two morally unacceptable solutions where many more would be available.

A self-driving car doesn't have to be programmed to kill you - why on earth would we do that? Just like in the trolley situation, there are always many things you could try that don't involve the active sacrifice of anyone involved.

Such as: yank the lever back and forth rapidly and hope that the trolley derails and hits neither. That solution is not an active choice to kill anyone, but rather an attempt to save everyone. If it fails, at least you tried.

Eikka
not rated yet Jun 14, 2015
"Ultimately, this problem devolves into a choice between utilitarianism and deontology,"


Ultimately, the problem comes down to causing minimum damage to the vehicle itself, because by proxy it also causes minimum damage to bystanders.

So instead of choosing to swerve into traffic or slam into a wall, why not just try to stay in the lane?
KBK
not rated yet Jun 14, 2015
Make it a condition of accepting a driver's license, or of using the car on the roads. Cars would probably not be 100% autonomous, so driving would still be required. Or the use of the autonomous car could require different legal 'agreements'.

Get the disclaimer ahead of time.

Make sure people know what they are getting into, then they cannot retroactively claim otherwise. Sign on the line.

Or go get a bicycle.

Driving is a privilege, one we train for and are given a 'license' for, in order to join in with others who are also trained and licensed... all in the same 'sandbox' -- called highways and roads.

Driving is not a 'right'.

However, it is notable that autonomous cars will change that 100-year-old equation.

Vehicles will become something else entirely, legally speaking, when 5-6 year old kids can take an autonomous box to get home by themselves.
KBK
4 / 5 (1) Jun 14, 2015
At that point, signing on to the autonomous vehicular transport network will probably involve accepting the transport terms ahead of time: that there will be no recourse from the vehicle's decisions, legally or otherwise.

There is no doubt whatsoever that they will, statistically speaking, be safer than self-driven cars. Traffic jams would be a thing of the past; you would, on average, spend far less time stuck in them. You would also be able to work, or do other things, while in transport.

Being statistically safer by a huge margin is going to end the self-driven car scenario. Big time.

Sad for me, as I like the rush and the thrill (cough cough), but it's a good tradeoff, and I'll seek thrills elsewhere, like parachuting or whatnot.
adam_russell_9615
not rated yet Jun 14, 2015
Instead of the manufacturer making the choices, why not make it an option for the OWNER to choose?
