Self-driving cars need 'adjustable ethics' set by owners

One of the self-drive cars already being used by Google in Nevada, in the US. Credit: EPA/Google

One of the issues of self-driving vehicles is legal liability for death or injury in the event of an accident. If the car maker programs the car so the driver has no choice, it is likely the company could be sued over the car's actions.

One way around this is to shift liability to the car owner by allowing them to determine a set of values or options in the event of an accident.

People are likely to want to have the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.

Self-drive is already here

With self-driving cars already legal to drive on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.

Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle. Much progress towards this has been made already. A growing number of cars, including mid-priced Fords, have an impressive range of accident-avoidance and driver-assist technologies such as automatic braking, lane-keeping and parking assist.

People who like driving for its own sake will probably not embrace the technology. But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.

Are they safe?

After almost 500,000km of on-road trials in the US, Google's test cars have not been in a single accident while under computer control.

Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage. But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are raising their heads.


The question of how a self-driven vehicle should react when faced with an accident in which all options lead to varying numbers of deaths was raised earlier this month.

This is an adaptation of the "trolley problem" that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people; pragmatically choosing the lesser of two evils.

An astute reader will point out that, under normal conditions, the car's collision-avoidance system should have applied the brakes before it became a life-and-death situation. That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.

Who is to blame for the deaths?

If car makers install a "do least harm" instruction and the car kills someone, they create legal liability for themselves. The car's AI has decided that a person shall be sacrificed for the greater good.

Had the car's AI not intervened, it's still possible people would have died, but it would have been you that killed them, not the car maker.

Car makers will obviously want to manage their risk by allowing the user to choose a policy for how the car will behave in an emergency: the user gets to decide how ethically their vehicle will behave.

As Patrick Lin points out, the options are many. You could be:

  • democratic and specify that everyone has equal value
  • pragmatic, so certain categories of person should take precedence, as with the kids on the crossing, for example
  • self-centred and specify that your life should be preserved above all
  • materialistic and choose the action that involves the least property damage.
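The policies above can be sketched as an owner-selectable setting that weights the predicted outcomes of each candidate emergency manoeuvre. This is a purely illustrative sketch: the policy names, outcome fields and weights are hypothetical, not drawn from any real vehicle software.

```python
from dataclasses import dataclass
from enum import Enum

class EthicsPolicy(Enum):
    """Owner-selected emergency policy (hypothetical names)."""
    DEMOCRATIC = "democratic"        # every life weighted equally
    PRAGMATIC = "pragmatic"          # protected categories weighted higher
    SELF_CENTRED = "self_centred"    # occupant safety above all
    MATERIALISTIC = "materialistic"  # minimise property damage

@dataclass
class Outcome:
    """Predicted result of one candidate emergency manoeuvre."""
    occupant_deaths: int
    pedestrian_deaths: int
    child_deaths: int        # subset of pedestrian_deaths
    property_damage: float   # estimated repair cost

def cost(outcome: Outcome, policy: EthicsPolicy) -> float:
    """Lower is better; the weights are illustrative only."""
    if policy is EthicsPolicy.DEMOCRATIC:
        return outcome.occupant_deaths + outcome.pedestrian_deaths
    if policy is EthicsPolicy.PRAGMATIC:
        # children count extra on top of the base death toll
        return (outcome.occupant_deaths + outcome.pedestrian_deaths
                + 2 * outcome.child_deaths)
    if policy is EthicsPolicy.SELF_CENTRED:
        return 1000 * outcome.occupant_deaths + outcome.pedestrian_deaths
    return outcome.property_damage  # MATERIALISTIC

def choose_manoeuvre(options: dict, policy: EthicsPolicy) -> str:
    """Pick the candidate manoeuvre with the lowest policy-weighted cost."""
    return min(options, key=lambda name: cost(options[name], policy))
```

Under a self-centred policy the same set of predicted outcomes can select a different manoeuvre than under a pragmatic one, which is exactly why the choice of policy, and the liability for it, matters.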
Planning for the unpredictable accident – so who’s to blame? Credit: Flickr/Johannes Ortner, CC BY-NC

While this is clearly a legal minefield, the car maker could argue that it should not be liable for damages that result from the user's choices – though the maker could still be faulted for giving the user a choice in the first place.

Let's say the car maker is successful in deflecting liability. In that case, the user becomes solely responsible whether or not they have a well-considered code of ethics that can deal with life-and-death situations.

People want choice

Code of ethics or not, a recent survey found that 44% of respondents believe they should have the option to choose how the car will behave in an emergency.

About 33% thought that government law-makers should decide. Only 12% thought the car maker should decide the ethical course of action.

In Lin's view, it then falls to the car makers to create a code of ethical conduct for robotic cars. This may well be good enough but, if it is not, government regulations could be introduced, including laws that limit a car maker's liability in the same way that legal protection for vaccine makers was introduced because it is in the public interest that people be vaccinated.

In the end, are not the tools we use, including the computers that do things for us, just extensions of ourselves? If that is so, then we are ultimately responsible for the consequences of their use.


This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).

Citation: Self-driving cars need 'adjustable ethics' set by owners (2014, August 25) retrieved 18 August 2019 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.


User comments

Aug 25, 2014
"Who is to blame for the deaths?"
That about sums it up. Our current legal system has led to the belief that if someone suffers a traumatic death, someone (or some deep-pocketed faceless corporation) must be at fault and must be made to pay.

Aug 25, 2014
drel, pretty sure the concepts of assigning blame and wanting justice and answers have been part of humanity for a few thousand years at least.

Aug 25, 2014
Note the reference to "liability mitigation" methodologies for vaccines "because it is in the public interest that people be vaccinated". But the New World Order keeps insisting vaccines are harmless, they are perfectly safe, they do everything they are supposed to, one hundred percent of the time! Now, we find they have special, unacknowledged, secret "government" programs to manage liability. Because it's important that everyone be vaccinated, even if a vaccine causes someone to go blind or lose their mind. It's being vaccinated that's important, not being a whole person. So very similar to the development swindle called "brownfields". It touted that it had found a way "to limit liability of building on contaminated soil". It was intended to convince the gullible that they necessarily had developed a means of cleaning sites. In fact, they only introduced the depraved LLC, "limited liability corporation", concept.

Aug 26, 2014
End User License Agreement.

Wherein you, the driver, accept all responsibility, etc.

What sort of brain seizure would prevent the legal department of whatever manufacturer from distributing their driving software any other way? It's otherwise an intractable ethical problem.

You can't measure ethics. Probably can't even get two ethicists to agree on what it means. Any belief that you've got it covered in a piece of software is pretentious.

Aug 28, 2014
The real argument here is that if roads become dominated by self-driving cars then they will have to be reclassified and treated more like railway lines. It is common knowledge that most trains cannot stop in time or see most obstacles, yet this still works in our society, so we may end up only being able to allow self-driving cars on fenced-in roads that fine people attempting to enter.
