Self-driving cars need 'adjustable ethics' set by owners

One of the self-drive cars already being used by Google in Nevada, in the US. Credit: EPA/Google

One of the issues with self-driving vehicles is legal liability for death or injury in the event of an accident. If the car maker programs the car so the driver has no choice, it is likely the company could be sued over the car's actions.

One way around this is to shift liability to the car owner by allowing them to determine a set of values or options in the event of an accident.

People are likely to want to have the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.

Self-drive is already here

With self-driving cars already legal on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.

Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle. Much progress has already been made: a growing number of cars, including mid-priced Fords, offer an impressive range of accident-avoidance and driver-assist technologies such as automatic braking, lane-keeping and parking assist.

People who like driving for its own sake will probably not embrace the technology. But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.

Are they safe?

After almost 500,000 km of on-road trials in the US, Google's test cars have not been in a single accident while under computer control.

Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage. But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are raising their heads.


The question of how a self-driven vehicle should react when faced with an accident in which every available option leads to some number of deaths was raised earlier this month.

This is an adaptation of the "trolley problem" that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people; pragmatically choosing the lesser of two evils.

An astute reader will point out that, under normal conditions, the car's collision-avoidance system should have applied the brakes before it became a life-and-death situation. That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.

Who is to blame for the deaths?

If car makers install a "do least harm" instruction and the car kills someone, they create legal liability for themselves. The car's AI has decided that a person shall be sacrificed for the greater good.

Had the car's AI not intervened, it's still possible people would have died, but it would have been you that killed them, not the car maker.

Car makers will obviously want to manage their risk by allowing the user to choose a policy for how the car will behave in an emergency. In effect, the user decides how ethically their vehicle will act.

As Patrick Lin points out, the options are many (a rough sketch of how such a setting might look in code follows the list). You could be:

  • democratic and specify that everyone has equal value
  • pragmatic, so certain categories of person take precedence (children on a pedestrian crossing, for example)
  • self-centred and specify that your life should be preserved above all
  • materialistic and choose the action that involves the least property damage.
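
To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of what an owner-adjustable ethics setting might look like. Nothing here reflects how Google or any car maker actually implements its software; the policy names simply mirror Lin's categories above, and every identifier (EthicsPolicy, Outcome, choose_action) is invented for illustration.

```python
# Hypothetical sketch only: illustrates how an owner-selected "adjustable
# ethics" policy could steer an emergency decision. All names are invented.

from dataclasses import dataclass
from enum import Enum, auto


class EthicsPolicy(Enum):
    DEMOCRATIC = auto()     # every life weighted equally
    PRAGMATIC = auto()      # protected groups (e.g. children) weighted first
    SELF_CENTRED = auto()   # occupant's safety above all
    MATERIALISTIC = auto()  # minimise property damage


@dataclass
class Outcome:
    action: str             # e.g. "brake", "swerve_left"
    deaths: int             # predicted fatalities outside the car
    protected_deaths: int   # predicted fatalities among protected groups
    occupant_harm: int      # 0 = none, higher = worse
    property_damage: float  # estimated cost


def choose_action(outcomes: list[Outcome], policy: EthicsPolicy) -> Outcome:
    """Pick the 'least bad' predicted outcome under the owner's chosen policy."""
    if policy is EthicsPolicy.DEMOCRATIC:
        key = lambda o: o.deaths + o.occupant_harm
    elif policy is EthicsPolicy.PRAGMATIC:
        key = lambda o: (o.protected_deaths, o.deaths + o.occupant_harm)
    elif policy is EthicsPolicy.SELF_CENTRED:
        key = lambda o: (o.occupant_harm, o.deaths)
    else:  # MATERIALISTIC
        key = lambda o: (o.property_damage, o.deaths)
    return min(outcomes, key=key)


if __name__ == "__main__":
    options = [
        Outcome("brake", deaths=2, protected_deaths=2, occupant_harm=0,
                property_damage=5_000),
        Outcome("swerve_left", deaths=0, protected_deaths=0, occupant_harm=3,
                property_damage=40_000),
    ]
    print(choose_action(options, EthicsPolicy.PRAGMATIC).action)     # swerve_left
    print(choose_action(options, EthicsPolicy.SELF_CENTRED).action)  # brake
```

In this toy scenario a "pragmatic" setting would swerve to spare the children at some cost to the occupant, while a "self-centred" setting would brake and protect the occupant. The point is simply that the owner's chosen policy, not the car maker, determines the trade-off.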

While this is clearly a legal minefield, the car maker could argue that it should not be liable for damages that result from the user's choices – though the maker could still be faulted for giving the user a choice in the first place.

Let's say the car maker is successful in deflecting liability. In that case, the user becomes solely responsible whether or not they have a well-considered code of ethics that can deal with life-and-death situations.

People want choice

Code of ethics or not, a recent survey found that 44% of respondents believe they should have the option to choose how their car will behave in an emergency.

About 33% thought that government law-makers should decide. Only 12% thought the car maker should decide the ethical course of action.

In Lin's view, it then falls to the car makers to create a code of ethical conduct for robotic cars. This may well be good enough, but if it is not, government regulation can be introduced, including laws that limit a car maker's liability in the same way vaccine makers were given legal protection because it is in the public interest that people be vaccinated.

In the end, are not the tools we use, including the computers that do things for us, just extensions of ourselves? If that is so, then we are ultimately responsible for the consequences of their use.

This story is published courtesy of The Conversation (under Creative Commons-Attribution/No derivatives).

Citation: Self-driving cars need 'adjustable ethics' set by owners (2014, August 25) retrieved 24 April 2024 from https://phys.org/news/2014-08-self-driving-cars-adjustable-ethics-byowners.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.

