Asimov's robots live on twenty years after his death

Apr 09, 2012 By Alan S. Brown
The cover of Asimov's "Runaround"

Renowned author Isaac Asimov died 20 years ago today. Although he wrote more than 500 books, the robot stories he began writing at age 19 are possibly his greatest accomplishment. They have become the starting point for any discussion about how smart robots will behave around humans.

The issue is no longer theoretical. Today, robots work in warehouses and factories, and South Korea plans to make them jailers. To date, Google's fleet of autonomous cars has driven individuals, including a legally blind man, more than 200,000 miles through cities and highways with little human intervention.

Several nations, notably the United States, South Korea, and Israel, operate aerial drones and autonomous land robots. Many are armed. The day is fast approaching when semiconductor circuits may make life-and-death decisions based on mathematical algorithms.

Robots were still new when Asimov began writing about them in 1939. The word first appeared in "R.U.R.," a 1920 play by Czech playwright Karel Capek, the same year Asimov was born. The teenaged Asimov saw them in pulp magazines sold in the family candy store in Brooklyn. Lurid drawings depicted them turning on their creators, usually while threatening a scantily clad female.

Asimov preferred stories that portrayed robots sympathetically.

"I didn't think a should be sympathetic just because it happened to be nice," he wrote in a 1980 essay. "It should be engineered to meet certain safety standards as any other machine should in any right-thinking technological society. I therefore began to write stories about robots that were not only sympathetic, but were sympathetic because they couldn't help it."

That idea infused Asimov's first robot stories. His editor, John Campbell of "Astounding Science-Fiction," wrote down a list of rules Asimov's robots obeyed. They became Asimov's Three Laws of Robotics:

• A robot must not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where those orders would conflict with the First Law.
• A robot must protect its own existence, except where such protection would conflict with the First or Second Law.

Like most things Asimov wrote, the Three Laws were clear, direct, and logical. Asimov's stories, on the other hand, told how easily they could fail.
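Expressed as logic, the Laws amount to a strict priority ordering: the First Law always outranks the Second, which always outranks the Third. A minimal sketch in Python (my illustration, not anything from Asimov; the Boolean judgments are hypothetical oracles no real robot possesses) makes the ordering concrete:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical, perfectly known judgments -- the stories turn on
    # the fact that no robot can actually evaluate these so cleanly.
    harms_human: bool      # would injure a human, or let one come to harm
    obeys_orders: bool     # consistent with standing human orders
    preserves_self: bool   # keeps the robot intact

def choose(actions: list[Action]) -> Action:
    """Pick an action under the Laws' strict priority ordering."""
    # max() compares the key tuples element by element, so avoiding
    # harm (First Law) dominates obedience (Second Law), which in
    # turn dominates self-preservation (Third Law).
    return max(actions, key=lambda a: (not a.harms_human,
                                       a.obeys_orders,
                                       a.preserves_self))
```

The tidiness is the point: the ranking is trivial to state, and everything interesting happens when those inputs are wrong, ambiguous, or in conflict.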

In "Liar," a telepathic robot lies rather than hurt people's feelings. Ultimately, the lies create havoc and break the heroine's heart.

In "Roundabout," a robot must risk danger to follow an order. When it nears the threat, it pulls back to protect itself. Once safe, it starts to follow orders again. The robot keeps repeating this pattern. The hero finally breaks the loop by deliberately putting himself in danger, forcing the robot to default to the First Law and save his life.

Asimov's robots are adaptive and sometimes reprogram themselves. One develops an emotional attachment to its creator. Another finds a logical reason to turn on humans. A robot isolated on a space station decides humans do not exist and develops its own religion. Another robot sues to be declared a person.

The contradictions in Asimov's laws encouraged others to propose new rules. One proposed that human-looking robots always identify themselves as robots. Another argued that robots must always know they are robots. A third, tongue in cheek, proposed that robots only kill enemy soldiers.

Michael Anissimov, of the Singularity Institute for Artificial Intelligence, a Silicon Valley think tank founded to develop safe AI software, argued that any set of rules will always have conflicts and grey areas.

In a 2004 essay, he wrote, "it's not so straightforward to convert a set of statements into a mind that follows or believes in those statements."

A robot might misapply laws in complex situations, especially if it did not understand why the law was created, he said. Also, robots might modify rules in unexpected ways as they re-program themselves to adapt to new circumstances.

Instead of rules, Anissimov believes we must create "friendly AI" that loves humanity.

While truly intelligent robots are decades away, autonomous robots are already making decisions independently. They require a different set of rules, according to Texas A&M computer scientist Robin Murphy and Ohio State Cognitive Systems Engineering Lab director David Woods.

In a 2009 paper, they proposed three laws to govern autonomous robots. The first assumes that since humans deploy robots, human-robot systems must meet high safety and ethical standards.

The second asserts that robots must obey appropriate commands, but only from a limited number of people. The third says that robots must protect themselves, but only after they transfer control of whatever they are doing (like driving a bus or running a machine) to humans.
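The third rule, in particular, reads like a handoff protocol. Here is a hedged sketch of how it might look in software; the class and method names are my own invention, since the 2009 paper states principles, not an API:

```python
AUTHORIZED = {"operator_1", "shift_supervisor"}  # hypothetical roster

class AutonomousBus:
    def __init__(self) -> None:
        self.human_in_control = False

    def accept_command(self, sender: str) -> bool:
        # Second rule: respond to commands, but only from the limited
        # set of people appropriate to the task.
        return sender in AUTHORIZED

    def evade_hazard(self) -> None:
        # Third rule: self-protection is permitted only after the task
        # (here, driving) has been transferred to a human.
        if not self.human_in_control:
            self.request_handoff()   # ask a human to take the wheel
            return                   # no self-preservation maneuver yet
        self.take_evasive_action()

    def request_handoff(self) -> None:
        print("Requesting human takeover before protecting myself")

    def take_evasive_action(self) -> None:
        print("Human has the task; robot now free to protect itself")
```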

The debate continues. In 2007, South Korea announced plans to publish a charter of human-robot ethics. It is likely to address such expert-identified issues as human addiction to robots (which could mimic how humans respond to video games or smartphones), human-robot sex, and safety. The European Robotics Research Network is considering similar issues.

This discussion would have happened with or without Asimov. Yet his Three Laws -- and their limits -- have certainly shaped the debate.

Source: Inside Science News Service

User comments

Raygunner, Apr 09, 2012
These rules (loving humans, etc.) HAVE to be engineered into the central processing unit and hard-wired as much as possible. The actual architecture of the chip has to reinforce this "loving" programming and not allow alternative programs to be loaded that would go against this native behavior. To do otherwise would invite disaster IMHO.

A couple of geek whack jobs, terrorists, or a country hell-bent on destroying perceived enemies (e.g., Iran vs. Israel) could build their own code or hack existing code, and all bets are off. A robot capable of hating and destroying humans is just as easy to program (once the AI is available to do this) as a "loving" robot without safeguards deep inside a processing core. Asimov's comment, "any right-thinking technological society," is very telling. I don't think our society is "right-thinking" anymore, at least not the ones in charge of our intel/military establishments. It's all about a one-up on the bad guys: they will do it, so we MUST do it first!

My 2 cents.

Yellowdart, Apr 09, 2012
The flaw in Anissimov's request is that this has already been tried. What he asks for is no different from this:

Love the creator with all your heart, soul, mind, and strength, and love your neighbor as yourself.

Whether in a human or ingrained into an AI robot, the fact that they are autonomous means they are capable of rejecting and disobeying their initial programming.

Easter was the answer to the problem. The only answer to having fellowship between creator and AI creation.


