If you want to trust a robot, look at how it makes decisions

Mar 11, 2014 by Michael Fisher, The Conversation
Your robot’s decisions will be less of a shock if you plan ahead. Credit: x-ray delta one, CC BY-SA

Robots, and autonomous systems in general, can cause anxiety and uncertainty, particularly as their use in everyday tasks becomes a more immediate possibility. In order to lessen at least some of that anxiety, we should shift our focus from the decisions robots could make on our behalf to how they actually make them in the first place. In some ways, they may be more trustworthy than a human.

Like it or not, autonomous systems are here and here to stay. By "autonomy" we mean the ability of a system to make its own decisions about what to do and when to do it. So far, most of the examples you might have come across, such as robot vacuum cleaners, aircraft autopilots and automated parking systems in your car, are simple and not even particularly autonomous. They respond automatically to changes in their environment, but only in the ways they were pre-programmed to.

But old science fiction stories warn us about systems that go further. What worries us is what happens when a human pilot, driver or operator is replaced by software that makes its own choices about what to do.

In air travel, an autopilot system can keep an aircraft flying on a certain path, but there is a human pilot deciding which path to take, when to divert and how to deal with unexpected situations. Similarly, cruise control, lane control and, soon, convoying will let our cars handle the path-following, though drivers will continue to make the big decisions.

But once we move to truly autonomous systems, software will play a much bigger part. We will no longer need a human to decide when to change the route of an aircraft or when to turn off the motorway onto a side road.

It is at this point that many of us start to worry. If a machine can truly make its own decisions then how do we know it is safe? After seeing movies such as Terminator, we wonder how we can trust machines not to double-cross us. For many, the idea of boarding an aeroplane that flies itself is unnerving, let alone the thought of allowing a robot assistant into their home.

It can feel like we face two options. Either we blindly trust these machines or we refuse to use autonomous systems at all. But a third option is possible.

When we deal with another human, we can't be sure what they will decide but we make assumptions based on what we think of them. We consider whether that person has lied to us in the past or has a record for making mistakes. But we can't really be certain about any of our assumptions as the other person could still deceive us.

Our autonomous systems, on the other hand, are essentially controlled by software, so if we can isolate the software that makes all the high-level decisions – those decisions that a human would have made – then we can analyse the detailed working of these programs. That's not something we can do with a human brain, and possibly never will be able to.

A popular strand of research in computer science is concerned with the deep analysis of software and, in particular, with providing logical proofs that the software will always match its formal requirements. This approach, called formal verification, is particularly useful for analysing and evaluating the kind of critical software that can affect human safety, such as that in power stations, life-support systems or transportation systems. So, once we have isolated the high-level decision-making software within our autonomous systems, we can subject it to formal verification.
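To make the idea concrete, here is a minimal sketch of the principle behind one form of formal verification: exhaustive exploration of every state a system can reach, checking a stated requirement in each one (explicit-state model checking). The toy aircraft model, the requirement and all of the names are assumptions for illustration only – real verification tools work over formal logics and far larger models, but the underlying idea is the same.

    # Toy verification: explore every state the system can reach and check a
    # stated requirement in each one. All names and the model are illustrative.

    def controller(position, fuel):
        """Hypothetical high-level decision logic under verification."""
        if position == "near_restricted":
            return "turn_away"
        if fuel == "fuel_low":
            return "divert"
        return "continue"

    def successors(position, fuel, action):
        """Hypothetical environment model: every state the world might move to
        after the chosen action (winds can drift the aircraft, fuel can drop)."""
        if action == "turn_away":
            results = {("on_route", fuel)}
        elif action == "divert":
            results = {("on_route", "fuel_ok")}
        elif position == "near_restricted":          # continuing here crosses the boundary
            results = {("in_restricted", fuel)}
        else:
            results = {("on_route", fuel), ("near_restricted", fuel)}
        return results | {(p, "fuel_low") for (p, _) in results}

    def requirement(position, fuel):
        """Formal requirement: the aircraft is never inside restricted airspace."""
        return position != "in_restricted"

    def verify(initial=("on_route", "fuel_ok")):
        """Exhaustively explore reachable states; return a counterexample if the
        requirement can ever be violated, otherwise report success."""
        seen, frontier = set(), [initial]
        while frontier:
            state = frontier.pop()
            if state in seen:
                continue
            seen.add(state)
            if not requirement(*state):
                return False, state                  # counterexample found
            frontier.extend(successors(*state, controller(*state)))
        return True, None                            # requirement holds in every reachable state

    print(verify())   # (True, None): this decision logic can never enter restricted airspace

Because the check visits every reachable state, a positive answer is a proof over the model rather than the result of a few test runs; a negative answer comes with a concrete counterexample showing exactly how the requirement could be broken.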

Our research concerns exactly this formal verification of autonomous system behaviours and, in some cases, we can prove that the software controlling our system will never make bad decisions.

The process of decision-making is central to making a robot trustworthy: if we can verify how a system reaches its decisions, we can be more comfortable with the eventual outcomes of those decisions. The environments in which such systems work are typically both complex and uncertain, so while accidents can still occur, we can at least be sure that the system always tries to avoid them. This might seem insufficient, but it allows us to tackle some of those old science fiction concerns.

While we cannot say that a robot will never accidentally harm someone, through formal verification we might well be able to prove that the robot never intentionally sets out to cause harm. By looking at the system's internal programming, we can often assess not just what it decides to do, but why it decided to do it. Although people will still be wary of robots and autonomous systems, if they know that the system is not actively intending to double-cross them, then they are likely to trust such systems a little more.
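As a hedged illustration of what checking the "why" rather than the "what" could look like, the sketch below labels every action the decision logic can choose with the effect it is intended to achieve, and then confirms that in no situation is a chosen action intended to cause harm. The situations, actions and labels are hypothetical and chosen purely for illustration.

    # Hypothetical labelling of each available action with the effect the robot
    # intends by choosing it; accidental side effects (bruised ribs from chest
    # compressions) are a separate matter from intent.
    INTENDED_EFFECT = {
        "chest_compression": "restore_circulation",
        "fetch_medication":  "deliver_treatment",
        "strike_patient":    "cause_harm",        # an action that must never be chosen
        "wait":              "no_change",
    }

    SITUATIONS = ["cardiac_arrest", "stable", "asleep"]

    def decide(situation):
        """Hypothetical high-level decision logic under analysis."""
        if situation == "cardiac_arrest":
            return "chest_compression"
        if situation == "stable":
            return "fetch_medication"
        return "wait"

    def never_intends_harm():
        """Check every situation the robot can face: the chosen action's
        intended effect must never be to cause harm."""
        for situation in SITUATIONS:
            action = decide(situation)
            if INTENDED_EFFECT[action] == "cause_harm":
                return False, (situation, action)  # counterexample
        return True, None

    print(never_intends_harm())   # (True, None): no decision is made with harm as its goal

Note what is and is not proved here: chest compressions may still hurt the patient, but the check shows that harm is never the goal of any decision the logic can make.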

The main problem now becomes what to prove, rather than how to prove it. We might try to prove that a robot never deliberately chooses to harm a human. But what about medical robots or police robots? A medical robot might try to resuscitate someone, for example by exerting pressure on their chest. But that might inadvertently harm them.

A police robot is charged with protecting the public but what if a criminal is shooting a gun at someone? Can the robot harm the criminal in order to avert the greater danger?
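One hypothetical way of making such trade-offs explicit, and therefore open to the same kind of analysis, is to rank the requirements and allow the decision logic to break a lower-priority rule only when that is the sole way to satisfy a higher-priority one. The rules, their ordering and the scenario below are illustrative assumptions; choosing the ordering is, of course, exactly the ethical question.

    # Requirements in priority order; each rule says when a choice is acceptable.
    RULES = [
        ("avert greater danger to the public",
         lambda situation, action: not (situation["bystander_under_fire"] and action == "stand_by")),
        ("do not harm a human",
         lambda situation, action: action != "use_force"),
    ]

    def choose(situation, options):
        """Pick the option whose most serious rule violation is least serious;
        an option that violates no rule always wins."""
        def severity(action):
            for rank, (_, rule) in enumerate(RULES):
                if not rule(situation, action):
                    return rank            # 0 = breaks the top rule, the worst case
            return len(RULES)              # breaks nothing
        return max(options, key=severity)

    situation = {"bystander_under_fire": True}
    print(choose(situation, ["stand_by", "use_force"]))
    # -> "use_force": harm is accepted only to avert the greater danger
    print(choose(situation, ["stand_by", "use_force", "shield_bystander"]))
    # -> "shield_bystander": an option that breaks no rule is always preferred

Because the rules and their ranking are written down explicitly, the same verification machinery could, in principle, prove statements such as "force is only ever used when standing by would leave the public in greater danger".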

We are clearly moving on from technical questions towards philosophical and ethical questions about what behaviour we find acceptable and what ethical behaviour our robots should exhibit. Having a clear view of how our robots are programmed to make decisions will help us as we try to make decisions on both fronts.
