Want responsible robotics? Start with responsible humans

Jul 29, 2009 by Pam Frost Gorder

(PhysOrg.com) -- When the legendary science fiction writer Isaac Asimov penned the "Three Laws of Robotics," he forever changed the way humans think about artificial intelligence and inspired generations of engineers to take up robotics.

In the current issue of the journal IEEE Intelligent Systems, two engineers propose alternative laws to rewrite our future with robots.

The future they foresee is at once safer and more realistic.

"When you think about it, our cultural view of robots has always been anti-people, pro-robot," explained David Woods, professor of integrated systems engineering at Ohio State University. "The philosophy has been, 'sure, people make mistakes, but robots will be better -- a perfect version of ourselves.' We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways."

Asimov's laws are iconic not only among engineers and enthusiasts, but the general public as well. The laws often serve as a starting point for discussions about the relationship between humans and robots.

But while evidence suggests that Asimov thought long and hard about his laws when he wrote them, Woods believes that the author did not intend for engineers to create robots that followed those laws to the letter.

"Go back to the original context of the stories," Woods said, referring to Asimov's I, among others. "He's using the three laws as a literary device. The plot is driven by the gaps in the laws -- the situations in which the laws break down. For those laws to be meaningful, robots have to possess a degree of social intelligence and moral intelligence, and Asimov examines what would happen when that intelligence isn't there."

"His stories are so compelling because they focus on the gap between our aspirations about robots and our actual capabilities. And that's the irony, isn't it? When we envision our future with robots, we focus on our hopes and desires and aspirations about robots -- not reality."

In reality, engineers are still struggling to give robots basic vision and language skills. These efforts are hindered in part by our lack of understanding of how these skills are managed in the human brain. We are far from a time when humans could teach robots a moral code and a sense of responsibility.

Woods and his coauthor, Robin Murphy of Texas A&M University, composed three laws that put the responsibility back on humans.

Woods directs the Cognitive Systems Engineering Laboratory at Ohio State, and is an expert in automation safety. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M, and is an expert in both rescue robotics and human-robot interaction.

Their laws focus on the human organizations that develop and deploy robots, and on ways to hold those organizations to high safety standards.

Here are Asimov's original three laws:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

And here are the three new laws that Woods and Murphy propose:

  • A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
  • A robot must respond to humans as appropriate for their roles.
  • A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

The new first law assumes the reality that humans deploy robots. The second assumes that robots will have limited ability to understand human orders, and so they will be designed to respond to an appropriate set of orders from a limited number of humans.
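
The second law's "respond to humans as appropriate for their roles" amounts, in practice, to ordinary command filtering. Here is a minimal sketch of that idea; the roles, permitted command sets, and function are invented for illustration and do not come from Woods and Murphy's paper.

```python
# Minimal sketch: role-based command filtering, one reading of the new
# second law. The roles and permitted commands below are hypothetical
# examples, not anything specified by Woods and Murphy.

ALLOWED_COMMANDS = {
    "operator":   {"drive", "stop", "return_home", "power_down"},
    "supervisor": {"stop", "power_down"},
    "bystander":  {"stop"},  # anyone nearby may trigger an emergency stop
}

def handle_command(role: str, command: str) -> bool:
    """Act on a command only if the issuer's role authorizes it."""
    if command in ALLOWED_COMMANDS.get(role, set()):
        print(f"Executing '{command}' issued by {role}")
        return True
    print(f"Ignoring '{command}' issued by {role}: not authorized")
    return False

handle_command("operator", "drive")   # accepted
handle_command("bystander", "drive")  # rejected
handle_command("bystander", "stop")   # accepted: safety commands stay open
```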

The last law is the most complex, Woods said.

"Robots exist in an open world where you can't predict everything that's going to happen. The robot has to have some autonomy in order to act and react in a real situation. It needs to make decisions to protect itself, but it also needs to transfer control to humans when appropriate. You don't want a robot to drive off a ledge, for instance -- unless a human needs the robot to drive off the ledge. When those situations happen, you need to have smooth transfer of control from the robot to the appropriate human," Woods said.

"The bottom line is, robots need to be responsive and resilient. They have to be able to protect themselves and also smoothly transfer control to humans when necessary."

Woods admits that one thing is missing from the new laws: the romance of Asimov's fiction -- the idea of a perfect, moral robot that sets engineers' hearts fluttering.

"Our laws are little more realistic, and therefore a little more boring," he laughed.

More information:

doi.ieeecomputersociety.org/10.1109/MIS.2009.69
en.wikipedia.org/wiki/Three_Laws_of_Robotics

Source: The Ohio State University

User comments: 10

docknowledge
4 / 5 (3) Jul 29, 2009
Having worked in Artificial Intelligence for many years, I can say flatly that Asimov's laws aren't the place to start almost any scholarly discussion. They are a crock, and so obviously flawed that Asimov was immediately able to write stories about how unworkable they are.

The fact that the public thinks they are somehow important is ... evidence of why some issues should never be put to a popular vote.

Woods and Murphy haven't improved the laws at all. Have they ever talked with a lawyer? What "high legal standards" are lawyers supposed to embody? Oh, the sneaky, self-serving, combative, expensive ones.

The rest of Woods and Murphy is equally naive and unworkable. I am so thankful I didn't have to go to Ohio State.
Izzmo
not rated yet Jul 29, 2009
You didn't say anything in your little speech as to why they will not and cannot work... so why?
just_doug
5 / 5 (2) Jul 29, 2009
Do those guys get significant military funding? The despots define legal and professional standards and also decide what your role is and therefore if any robot should pay any attention to what you had to say. Cannon fodder, serf or tax revenue producer aren't likely to be listened to, and those are the main roles in modern society.
docknowledge
1 / 5 (1) Jul 30, 2009
Little speech? I thought it was a great speech.

Heh, kidding. It's just an issue I've been regularly confronted with for years. So I've had a chance to think it through.

There are problems with the three laws...coming from several different angles. Let's go for something that's pretty to the point...

Assume I am a terrorist. I live in Iran or N. Korea. I don't want a robot that follows the first law. I want a robot that kills as many people as possible before it is taken down. The more Americans that die, the better.

No "first law".
RayCherry
5 / 5 (1) Jul 30, 2009
Obviously Robots will not require Political Correctness either. You live in a place where the laws apply only if you get caught and cannot afford an effective judicial defence. What consequences will Robots face when they (also) conveniently forget the laws?
wiyosaya
5 / 5 (1) Jul 30, 2009
I have to agree that these laws are a crock, and Asimov's laws were a literary gimmick, far from anything that will be realistically possible for years to come.

The third law that these "professionals" state is an extreme example of the "crockedness" of these laws. To me, it implies an extreme of artificial intelligence that real technology, i.e., technology that is in everyday use, is nowhere near achieving.

So, the robot makes a decision about protecting its own existence. Just what does this mean? If you tell the robot to power down and turn off, does the robot refuse because turning off would endanger its existence?

Technology is nowhere near the level of artificial intelligence required for a robot to be able to make such decisions. Such decisions are currently made by the program that the robot's computer is executing. I am willing to bet that even with the world's most advanced robots, i.e., those that possess the highest levels of artificial intelligence, it would be an extraordinary effort to even get them to recognize situations where their existence would be in danger, sensory input aside.

Given the right sensory input, a robot could be programmed to stop if it senses that there is a cliff in front of it. In that case, though, the robot itself does not make the decision, nor does it need "artificial intelligence" to make the decision. It is the programmer that made the decision for the robot. To have this done by A.I., though, sounds like an immense challenge, and it certainly seems like there are things on which the time would be better spent.
COCO
5 / 5 (1) Jul 31, 2009
did no one see Terminator!! Rules are for girlie boys - soon Amerika will be able to replace its regular armed goons with robots who kill based on skin colour - productivity improvement for sure - what are we going to do with those red-neck psychos then?
ketanco
not rated yet Jul 31, 2009
These "laws" mean nothing, when there is no AI that "understands" them... These laws are only for us to understand - at least until we have a human level AI.
docknowledge
not rated yet Aug 01, 2009
ketanco, good thinking, except the laws themselves don't make sense -- there's nothing to "understand". They are inherently contradictory.

Having worked with NASA, and being a science-fiction fan, I was rather surprised bumping into major sci-fi authors a couple times. Two of them couldn't have cared less that I had technical information related to their work. (Shrug of the shoulders, "It's fiction.") One who I did not meet, Arthur C. Clarke, was a keen fan of a project I was associated with. Although Clarke had the intelligence and interest to understand the project (which superficially was not very technical), all I ever saw him do was parrot the official project documentation put out by the project marketing. (Material which I came to realize was quite misleading.)

The moral of the story is: the three laws are fiction.
docknowledge
not rated yet Aug 04, 2009
This quote came from a BBC article today:

"The problem is that this is all based on artificial intelligence, and the military have a strange view of artificial intelligence based on science fiction."

http://news.bbc.c...2003.stm
