Asimov's robots live on twenty years after his death

April 9, 2012 By Alan S. Brown, Inside Science News Service
The cover of Asimov's "Runaround"

Renowned author Isaac Asimov died 20 years ago today. Although he wrote more than 500 books, the robot stories he began writing at age 19 are possibly his greatest accomplishment. They have become the starting point for any discussion about how smart robots will behave around humans.

The issue is no longer theoretical. Robots already work in warehouses and factories, and South Korea plans to make them jailers. To date, Google's fleet of autonomous cars has driven individuals, including a legally blind man, more than 200,000 miles through cities and highways with little human intervention.

Several nations, notably the United States, South Korea, and Israel, operate unmanned aerial drones and autonomous land robots. Many are armed. The day is fast approaching when semiconductor circuits may make life-and-death decisions based on mathematical algorithms.

Robots were still new when Asimov began writing about them in 1939. The word was first used in a play by Czech playwright Karel Capek in 1920, the same year Asimov was born. The teenaged Asimov saw them in pulp magazines sold in the family candy store in Brooklyn. Lurid drawings depicted them turning on their creators, usually while threatening a scantily clad female.

Asimov preferred stories that portrayed robots sympathetically.

"I didn't think a should be sympathetic just because it happened to be nice," he wrote in a 1980 essay. "It should be engineered to meet certain safety standards as any other machine should in any right-thinking technological society. I therefore began to write stories about robots that were not only sympathetic, but were sympathetic because they couldn't help it."

That idea infused Asimov's first robot stories. His editor, John Campbell of "Astounding Science-Fiction," wrote down a list of rules Asimov's robots obeyed. They became Asimov's Three Laws of Robotics:

• A robot must not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where those orders would conflict with the First Law.
• A robot must protect its own existence, except where such protection would conflict with the First or Second Law.
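The three laws form a strict priority ordering: each later law yields to the ones before it. That ordering can be sketched as a simple decision procedure (purely illustrative; the predicates are hypothetical stand-ins for judgments no real robot could easily make, not anything Asimov specified):

```python
# Illustrative sketch of the Three Laws as a strict priority ordering.
# The predicates (harms_human, ordered_by_human, endangers_self) are
# hypothetical placeholders, not part of any real robotics system.

def permitted(action, harms_human, ordered_by_human, endangers_self):
    """Return True if `action` is allowed under the Three Laws."""
    if harms_human(action):            # First Law: overrides everything
        return False
    if ordered_by_human(action):       # Second Law: obey, unless the First Law objects
        return True
    return not endangers_self(action)  # Third Law: self-preservation comes last

# Example: an order that would harm a human is refused,
# even though obeying human orders is otherwise mandatory.
refused = permitted("push bystander",
                    harms_human=lambda a: True,
                    ordered_by_human=lambda a: True,
                    endangers_self=lambda a: False)
```

Even in this toy form, the sketch hints at the problem Asimov's stories exploit: everything hinges on predicates like "harms a human," which the laws themselves do not define.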

Like most things Asimov wrote, the Three Laws were clear, direct, and logical. Asimov's stories, on the other hand, told how easily they could fail.

In "Liar," a telepathic robot lies rather than hurt people's feelings. Ultimately, the lies create havoc and break the heroine's heart.

In "Roundabout," a robot must risk danger to follow an order. When it nears the threat, it pulls back to protect itself. Once safe, it starts to follow orders again. The robot keeps repeating this pattern. The hero finally breaks the loop by deliberately putting himself in danger, forcing the robot to default to the First Law and save his life.

Asimov's robots are adaptive and sometimes reprogram themselves. One develops an emotional attachment to its creator. Another finds a logical reason to turn on humans. A robot isolated on a space station decides humans do not exist and develops its own religion. Another robot sues to be declared a person.

The contradictions in Asimov's laws encouraged others to propose new rules. One proposed that human-looking robots always identify themselves as robots. Another argued that robots must always know they are robots. A third, tongue in cheek, proposed that robots only kill enemy soldiers.

Michael Anissimov, of the Singularity Institute for Artificial Intelligence, a Silicon Valley think tank founded to develop safe AI software, argued that any set of rules will always have conflicts and grey areas.

In a 2004 essay, he wrote, "it's not so straightforward to convert a set of statements into a mind that follows or believes in those statements."

A robot might misapply laws in complex situations, especially if it did not understand why the law was created, he said. Also, robots might modify rules in unexpected ways as they re-program themselves to adapt to new circumstances.

Instead of rules, Anissimov believes we must create "friendly AI" that loves humanity.

While truly intelligent robots are decades away, autonomous robots are already making decisions independently. They require a different set of rules, according to Texas A&M computer scientist Robin Murphy and Ohio State Cognitive Systems Engineering Lab director David Woods.

In a 2009 paper, they proposed three laws to govern autonomous robots. The first assumes that since humans deploy robots, human-robot systems must meet high safety and ethical standards.

The second asserts robots must obey appropriate commands, but only from a limited number of people. The third says that robots must protect themselves, but only after they transfer control of whatever they are doing (like driving a bus or running a machine) to humans.

The debate continues. In 2007, South Korea announced plans to publish a charter of human-robot ethics. It is likely to address issues identified by experts, such as human addiction to robots (which could mimic how humans respond to video games or smartphones), human-robot sex, and safety. The European Robotics Research Network is considering similar issues.

This discussion would have happened with or without Asimov. Yet his Three Laws -- and their limits -- have certainly shaped the debate.


