Robots can't kill you – claiming they can is dangerous

July 6, 2015 by Ron Chrisley, The Conversation
Credit: Jiuguang Wang/flickr, CC BY-SA

Robots' involvement in human deaths is nothing new. The recent death of a man who was grabbed by a robot and crushed against a metal plate at a Volkswagen factory in Baunatal, Germany, attracted extensive media attention. But it is strikingly similar to one of the first recorded cases of a death involving an industrial robot, 34 years ago.

These incidents have happened before and will happen again. Even if safety standards continue to rise and the chance of an accident happening in any given human/robotic interaction goes down, such events will become more frequent simply because of the ever-increasing number of robots.

This means it is important to understand this kind of incident properly, and a key part of doing so is using accurate and appropriate language to describe them. Although there is a sense in which it is legitimate to refer to the Baunatal incident as a case of "robot kills worker", as many reports have done, it is misleading, verging on the irresponsible, to do so. It would be much better to express it as a case of "worker killed in robot accident".

Admittedly, putting it that way isn't as eye-grabbing, but that's precisely the point. The fact is that robots, despite what one might be encouraged to believe from sci-fi, and despite what may happen in the far future, currently lack what we consider real intentions, emotions and purposes. And, contrary to recent alarmist claims, they are not going to acquire those capacities in the near future.

They can only "kill" in the sense that a hurricane (or a car, or a gun) can kill. They can't kill in the sense that some animals can, let alone in the human sense of murder. Yet murder is likely to be what springs to most people's minds when they read "robot kills worker".

High stakes

Insisting on getting this language right isn't an academic exercise in pedantry. The stakes are high. For one thing, an unwarranted fear of robots could lead to another unnecessary "artificial intelligence winter", a period where the technology ceases to receive research funding. This would delay or deny the considerable benefits robots can bring not just to industry but society in general.

But even if you're not optimistic about the benefits of robots, you should still want to get this issue right. Since robots don't have responsibility, humans are the ones responsible for what robots do. However, as robots become more prevalent, it will increasingly appear as if they actually have their own autonomy and intentions, for which it will seem they can and should be held responsible.

Although there may eventually come a day when that appearance is matched by reality, there will be a long period of time (which has already begun) when this appearance will be false. Even now we are already tempted to categorise our interactions with robots into what we are responsible for and what they are responsible for. This raises the danger of scapegoating the robot, and failing to hold the human designers, deployers and users involved fully responsible.

Moral robots or morally made robots?

It's not just those reporting on robots who need to get the language right. Policymakers, salespeople, and those in research and development who are designing the robots of today and tomorrow need to keep a clear head. Instead of asking "what's the best way to make moral robots?", we should ask "what's the best way to morally make robots?".

This subtle change in language, if adopted, would result in big changes in design. For example, trying to give robots moral laws to follow would require us to provide them with a human-like level of common sense with which to apply those laws – a far harder problem. Instead of pursuing such a design dead end, we could aim for machines that are the result of their designers' own morals, just as we try to design non-robotic technology ethically.
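
To make the contrast concrete, here is a minimal sketch of the "morally made" approach, assuming a hypothetical industrial-arm controller. Every name and threshold below (ArmState, SafetyInterlock, the speed and distance limits) is an illustrative assumption, not any real vendor's API. The point is that the safety-relevant choices are made, documented and owned by human designers; the machine merely executes them.

```python
# Hypothetical sketch: human-designed safety constraints in a robot controller.
# All names and thresholds are illustrative assumptions, not a real robot API.

from dataclasses import dataclass


@dataclass
class ArmState:
    speed_mm_s: float          # current tool speed
    human_distance_mm: float   # distance to nearest detected person


class SafetyInterlock:
    """Constraints chosen by human designers. The robot applies them,
    but the moral judgments (what counts as 'too close' or 'too fast')
    were made, and are owned, by people."""

    MAX_SPEED_NEAR_HUMAN = 250.0   # mm/s; a designer's judgment call
    MIN_HUMAN_DISTANCE = 500.0     # mm; likewise

    def permitted(self, state: ArmState) -> bool:
        # When a person is nearby, only slow movement is allowed.
        if state.human_distance_mm < self.MIN_HUMAN_DISTANCE:
            return state.speed_mm_s <= self.MAX_SPEED_NEAR_HUMAN
        return True


def step(interlock: SafetyInterlock, state: ArmState) -> str:
    # The controller never "decides" anything moral; it executes rules
    # whose authors are identifiable and accountable.
    return "run" if interlock.permitted(state) else "halt"


if __name__ == "__main__":
    interlock = SafetyInterlock()
    print(step(interlock, ArmState(speed_mm_s=400.0, human_distance_mm=300.0)))  # halt
    print(step(interlock, ArmState(speed_mm_s=200.0, human_distance_mm=300.0)))  # run
```

On this view, if the interlock's limits prove inadequate, the question is not whether the robot was "at fault" but whether the people who chose those limits exercised due care.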

In the Volkswagen accident, a company spokesperson reportedly said "initial conclusions indicate that human error was to blame, rather than a problem with the robot". Other reports spoke of it being human error rather than the robot "being at fault" or "accountable". This implies that, in other circumstances, the robot could have been considered to blame for the accident.

If there was a "problem with the robot", be it faulty materials, a misperforming circuit board, bad programming, poor design of installation or operational protocols, that problem – or the failure to anticipate it – would still have been due to human error. Yes, there are industrial accidents where no human or group of humans is to blame. But we mustn't be tempted by the appearance of agency in robots to absolve their human creators of responsibility. Not yet, anyway.

Comments

Stevepidge
Jul 06, 2015
What an utterly irresponsible robotics and AI industry puff piece. The sheer fact that humans are the creators of robots ensures that they will indeed kill. Go to sleep sheep the robots mean you no harm whatsoever.
EWH
Jul 06, 2015
If your robots can't kill people then I humbly suggest that you may be missing some really awesome opportunities.
xstos
Jul 06, 2015
Robots cannot kill on purpose because there are no sentient robots is what this confused article is trying to get across and failing.
