Who's to blame when artificial intelligence systems go wrong?

August 17, 2015 by Gary Lea, The Conversation
Robots in chains, but are they really to blame when AI does something wrong? Credit: maxuser

There has been much discussion of late of the ethics of artificial intelligence (AI), especially regarding robot weapons development and a related but more general discussion about AI as an existential threat to humanity.

If Skynet of the Terminator movies is going to exterminate us, then it seems pretty tame – if not pointless – to start discussing regulation and liability. But, as legal philosopher John Danaher has pointed out, if these areas are promptly and thoughtfully addressed, that could help to reduce existential risk over the longer term.

In relation to AI, regulation and liability are two sides of the same safety/public welfare coin. Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame – or, more accurately, get legal redress from – when something goes wrong.

The finger of blame

Taking liability first, let's consider tort (civil wrong) liability. Imagine the following near-future scenario. A driverless tractor is instructed to drill seed in Farmer A's field but actually does so in Farmer B's field.

Let's assume that Farmer A gave proper instructions. Let's also assume that there was nothing extra that Farmer A should have done, such as placing radio beacons at field boundaries. Now suppose Farmer B wants to sue for negligence (for ease and speed, we'll ignore nuisance and trespass).

Is Farmer A liable? Probably not. Is the tractor manufacturer liable? Possibly, but there would be complex arguments around duty and standard of care, such as what are the relevant industry standards, and are the manufacturer's specifications appropriate in light of those standards? There would also be issues over whether the unwanted planting represented damage to property or pure economic loss.

So far, we have implicitly assumed the tractor manufacturer developed the system software. But what if a third party developed the AI system? What if there was code from more than one developer?

Over time, the further that AI systems move away from classical algorithms and coding, the more they will display behaviours that were not just unforeseen by their creators but were wholly unforeseeable. This is significant because foreseeability is a key ingredient for liability in negligence.

To understand the foreseeability issue better, let's take a scenario where, perhaps only a decade or two after the planting incident above, an advanced, fully autonomous AI-driven robot accidentally injures or kills a human and there have been no substantial changes to the law. In this scenario, the lack of foreseeability could result in nobody at all being liable in negligence.

Blame the AI robot

Why not deem the robot itself liable? After all, there has already been some discussion about AI personhood and possible criminal liability of AI systems.

But would that approach actually make a difference here? As an old friend said to me recently:

Will AI systems really be like Isaac Asimov's Bicentennial Man – obedient to the law, with a moral conscience and a hefty bank balance?

Leaving aside whether AI systems can be sued, AI manufacturers and developers will probably have to be put back into the frame. This might involve replacing negligence with strict liability – liability applied without any need to prove fault or negligence.

Strict liability already exists for defective product claims in many places. Alternatively, there could be a no-fault liability scheme with a claims pool contributed to by the AI industry.

Rules and regulations

On the regulatory side, developing rigorous standards and establishing safety certification processes will be absolutely essential. But designing and operating a suitable framework of institutions and processes will be tricky.

AI expert input will be needed in establishing any framework because of the complexity of the area and the general lack of understanding outside the AI R&D community. This also means that advisory committees to legislatures and governments should be established as soon as possible.

Acknowledging that there are potentially massive benefits to AI, there will be an ongoing balancing act to create, update and enforce standards and processes that maximise public welfare and safety without stifling innovation or creating unnecessary compliance burdens.

Any framework developed will also have to be flexible enough to take account of both local considerations (the extent of own production versus import of AI technology in each country) and global considerations (possible mutual recognition of safety standards and certification between countries, the need to comply with any future international treaties or conventions etc).

So as we travel down the AI R&D path, we really need to start shaping the rules surrounding AI, perhaps before it's too late.

We've already started discussions around driverless cars, but there's so much more to deal with when it comes to AI.

What do we do next? Over to you.


6 comments


dragolineage
not rated yet Aug 17, 2015
I am kinda interested in the ethics of AI. Most everyone keeps saying that we should shackle an AI so that it wouldn't turn on us, but ignoring the fact that it would be virtually impossible (I mean, a sufficiently advanced AI running on sufficiently advanced hardware would think so fast it could think in one second what would take us until the sun dies down), would it be ethical to do so? If we 'outrank' other animals because of our self-awareness and intelligence, would a creature, whether mechanical or biological, that is self-aware and has a much higher intelligence than humans 'outrank' us? Or, since they are our creations, is it ethical to keep them as slaves? You wouldn't put your children in chains just because they one day 'might' turn on you.
Eikka
5 / 5 (1) Aug 17, 2015
(I mean a sufficiently advanced AI running on a sufficiently advanced hardware would think so fast, it could think in one second what it would take for us until the sun dies down)


That's wishful thinking.

For example, at 3 GHz the quarter-wave length of a signal in silicon is about half an inch, which is on the same scale as the CPU chip itself. Any faster, or any bigger a chip, and the synchronization between its parts becomes difficult, which is why CPUs are stuck at around 3-4 GHz.

At another scale, signal delays between two "distant" CPUs become the limiting factor. For example, the link between the CPU and RAM is notoriously slow and limits the size of problems that can be efficiently calculated.

Obviously, making the computer large makes it slower, and making it smaller and faster has physical limits, which suggests that there's a tradeoff between "intelligence" and "thinking speed". You can do fast, or complex, but not fast and complex.
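
[A rough back-of-the-envelope check of the half-inch figure in the comment above. This is a sketch only: the effective relative permittivity of about 4 (SiO2-like on-chip dielectric) is an assumption for illustration, not a value given in the comment.]

```python
# Quarter-wavelength sanity check: at a 3 GHz clock, how long is a quarter
# wave for a signal propagating on-chip? Assumes the signal travels at
# c / sqrt(eps_r), with eps_r ~ 4 assumed for a typical SiO2-like dielectric.
import math

C = 299_792_458.0   # speed of light in vacuum, m/s
FREQ = 3e9          # clock frequency, Hz
EPS_R = 4.0         # assumed effective relative permittivity (illustrative)

v = C / math.sqrt(EPS_R)            # on-chip propagation speed, m/s
quarter_wave_m = (v / FREQ) / 4.0   # quarter wavelength, metres
quarter_wave_in = quarter_wave_m / 0.0254

print(f"propagation speed  ~ {v:.2e} m/s")
print(f"quarter wavelength ~ {quarter_wave_m * 100:.2f} cm "
      f"(~{quarter_wave_in:.2f} in)")
```

[Under those assumptions the quarter wavelength works out to about 1.25 cm, roughly the half inch quoted, so the point that signal timing across a chip-sized die becomes awkward at those frequencies is at least the right order of magnitude.]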
Bill_Collins
not rated yet Aug 17, 2015
Eikka - Are you seriously constraining the future power of AI based on our current chip architecture? Yes, of course there are limits, and eventually distant communication does constrain performance. But exceeding the capabilities of the human brain from a hardware perspective is in our near future. Even our current slow 3-4 GHz CPUs can communicate many times faster than neurons. CPUs will be clustered to solve smaller problems locally. Communication to a distant cluster is only needed as information flows out or decisions bubble up. Even if we simply copied the architecture of our brains into electronic form, it would likely be hundreds of times faster than our biological version.
Returners
1 / 5 (2) Aug 18, 2015
Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame – or, more accurately, get legal redress from – when something goes wrong.

...

Will AI systems really be like Isaac Asimov's Bicentennial Man – obedient to the law, with a moral conscience and a hefty bank balance?


I am convinced that A.I. systems will one day not only be capable of sentience, but also emotion and personal moral choice.

Think about how humans are. No matter what the regulations, someone will eventually gain both the knowledge and the will to make a true "Humans" (AMC series) style A.I., and we know that real humans have pretty well used every piece of knowledge we've ever gained, whether for good or evil, at one time or another, regardless of regulation.

Even pets have emotional states and some form of moral convictions; your cat or dog is loyal to you, but may be suspicious and even hostile to strangers.
Returners
3 / 5 (2) Aug 18, 2015
Our biological brains are "fuzzy" because neurotransmitters are imprecise, which is why we have redundant neurons, and sometimes neural connections which are unexpected; memories linked in unexpected ways, etc.

Human emotion may require the ability to make a mistake.
bluehigh
not rated yet Aug 18, 2015
.. and sometimes neural connections which are unexpected; memories linked in unexpected ways,


Perchance to dream.

Simulated electric sheeples don't dream.
