Self-driving cars need 'adjustable ethics' set by owners

August 25, 2014 by David Tuffley, The Conversation
One of the self-drive cars already being used by Google in Nevada, in the US. Credit: EPA/Google

One of the unresolved issues with self-driving vehicles is legal liability for death or injury in the event of an accident. If the car maker programs the car so the driver has no choice, it is likely the company could be sued over the car's actions.

One way around this is to shift liability to the car owner by allowing them to determine a set of values or options in the event of an accident.

People are likely to want to have the option to choose how their vehicle behaves, both in an emergency and in general, so it seems the issue of adjustable ethics will become real as robotically controlled vehicles become more common.

Self-drive is already here

With self-driving cars already legal to drive on public roads in a growing number of US states, the trend is spreading around the world. The United Kingdom will allow these vehicles from January 2015.

Before there is widespread adoption, though, people will need to be comfortable with the idea of a computer being in full control of their vehicle. Much progress towards this has been made already. A growing number of cars, including mid-priced Fords, have an impressive range of accident-avoidance and driver-assist technologies such as automatic braking, lane-keeping and parking assist.

People who like driving for its own sake will probably not embrace the technology. But there are plenty of people who already love the convenience, just as they might also opt for automatic transmission over manual.

Are they safe?

After almost 500,000 km of on-road trials in the US, Google's test cars have not been in a single accident while under computer control.

Computers have faster reaction times and do not get tired, drunk or impatient. Nor are they given to road rage. But as accident-avoidance and driver-assist technologies become more sophisticated, some ethical issues are raising their heads.


The question of how a self-driving vehicle should react when faced with an unavoidable accident, in which every available option leads to some number of deaths, was raised earlier this month.

This is an adaptation of the "trolley problem" that ethicists use to explore the dilemma of sacrificing an innocent person to save multiple innocent people; pragmatically choosing the lesser of two evils.

An astute reader will point out that, under normal conditions, the car's collision-avoidance system should have applied the brakes before it became a life-and-death situation. That is true most of the time, but with cars controlled by artificial intelligence (AI), we are dealing with unforeseen events for which no design currently exists.

Who is to blame for the deaths?

If car makers install a "do least harm" instruction and the car kills someone, they create legal liability for themselves. The car's AI has decided that a person shall be sacrificed for the greater good.

Had the car's AI not intervened, it's still possible people would have died, but it would have been you who killed them, not the car maker.

Car makers will obviously want to manage their risk by letting the user choose a policy for how the car will behave in an emergency. In effect, the user decides how ethically their vehicle will act.

As Patrick Lin points out, the options are many. You could be:

  • democratic and specify that everyone has equal value
  • pragmatic, so certain categories of person should take precedence, as with children on a crossing, for example
  • self-centred and specify that your life should be preserved above all
  • materialistic and choose the action that involves the least property damage.
Planning for the unpredictable accident – so who’s to blame? Credit: Flickr/Johannes Ortner, CC BY-NC
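To make the idea concrete, the four options above could be sketched as selectable policies that weight possible outcomes differently. This is a purely illustrative sketch: the class names, categories and numeric weights are invented for this example and are not drawn from any real car maker's system.

```python
from dataclasses import dataclass
from enum import Enum

class EthicsPolicy(Enum):
    DEMOCRATIC = "democratic"        # everyone has equal value
    PRAGMATIC = "pragmatic"          # certain categories take precedence
    SELF_CENTRED = "self_centred"    # preserve the occupant above all
    MATERIALISTIC = "materialistic"  # minimise property damage

@dataclass
class Outcome:
    occupant_deaths: int
    pedestrian_deaths: int
    child_deaths: int
    property_damage: float  # arbitrary cost units

def score(outcome: Outcome, policy: EthicsPolicy) -> float:
    """Lower score means a more preferred outcome under the chosen policy.
    The weights here are arbitrary placeholders, not calibrated values."""
    if policy is EthicsPolicy.DEMOCRATIC:
        return (outcome.occupant_deaths + outcome.pedestrian_deaths
                + outcome.child_deaths)
    if policy is EthicsPolicy.PRAGMATIC:
        # children's lives weighted more heavily than other deaths
        return (outcome.occupant_deaths + outcome.pedestrian_deaths
                + 10 * outcome.child_deaths)
    if policy is EthicsPolicy.SELF_CENTRED:
        # occupant deaths dominate every other consideration
        return (1000 * outcome.occupant_deaths + outcome.pedestrian_deaths
                + outcome.child_deaths)
    return outcome.property_damage  # MATERIALISTIC

def choose(outcomes: list[Outcome], policy: EthicsPolicy) -> Outcome:
    """Pick the outcome with the lowest score under the given policy."""
    return min(outcomes, key=lambda o: score(o, policy))

# Hypothetical emergency: swerve (killing the occupant) or continue
# straight (killing two children on a crossing).
swerve = Outcome(occupant_deaths=1, pedestrian_deaths=0,
                 child_deaths=0, property_damage=50000)
straight = Outcome(occupant_deaths=0, pedestrian_deaths=0,
                   child_deaths=2, property_damage=0)
```

Under this sketch, a "pragmatic" owner's car would swerve to spare the children, while a "self-centred" owner's car would continue straight; the legal minefield discussed below follows directly from that divergence.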

While this is clearly a legal minefield, the car maker could argue that it should not be liable for damages that result from the user's choices – though the maker could still be faulted for giving the user a choice in the first place.

Let's say the car maker is successful in deflecting liability. In that case, the user becomes solely responsible whether or not they have a well-considered code of ethics that can deal with life-and-death situations.

People want choice

Code of ethics or not, a recent survey found that 44% of respondents believe they should have the option to choose how the car will behave in an emergency.

About 33% thought that government law-makers should decide. Only 12% thought the car maker should decide the ethical course of action.

In Lin's view, it falls to the car makers to create a code of ethical conduct for robotic cars. This may well be good enough. If it is not, government regulations can be introduced, including laws that limit a car maker's liability in the same way that legal protection for vaccine makers was introduced because it is in the public interest that people be vaccinated.

In the end, are not the tools we use, including the computers that do things for us, just extensions of ourselves? If that is so, then we are ultimately responsible for the consequences of their use.



4 / 5 (4) Aug 25, 2014
"Who is to blame for the deaths?"
That about sums it up. Our current legal system has led to the belief that if someone suffers a traumatic death, someone (or some deep-pocketed faceless corporation) must be at fault and must be made to pay.
4.5 / 5 (2) Aug 25, 2014
drel, pretty sure the concepts of assigning blame and wanting justice and answers have been part of humanity for a few thousand years at least.
1 / 5 (3) Aug 25, 2014
Note the reference to "liability mitigation" methodologies for vaccines "because it is in the public interest that people be vaccinated". But the New World Order keeps insisting vaccines are harmless, they are perfectly safe, they do everything they are supposed to, one hundred percent of the time! Now, we find they have special, unacknowledged, secret "government" programs to manage liability. Because it's important that everyone be vaccinated, even if a vaccine causes someone to go blind or lose their mind. It's being vaccinated that's important, not being a whole person. So very similar to the development swindle called "brownfields". It touted that it had found a way "to limit liability of building on contaminated soil". It was intended to convince the gullible that they had necessarily developed a means of cleaning sites. In fact, they only introduced the depraved LLC, "limited liability corporation", concept.
5 / 5 (1) Aug 26, 2014
End User License Agreement.

Wherein you, the driver, accept all responsibility, etc.

What sort of brain seizure would prevent the legal department of whatever manufacturer from distributing their driving software any other way? It's otherwise an intractable ethical problem.

You can't measure ethics. Probably can't even get two ethicists to agree on what it means. Any belief that you've got it covered in a piece of software is pretentious.
not rated yet Aug 28, 2014
The real argument here is that if roads become dominated by self-driving cars then they will have to be reclassified and treated more like railway lines. It is common knowledge that most trains cannot stop in time or see most obstacles, yet this still works in our society, so we may end up only being able to allow self-driving cars on fenced-in roads that fine people attempting to enter.
