Scientists, lawyers mull effects of home robots

(AP) -- Eric Horvitz illustrates the potential dilemmas of living with robots by telling the story of how he once got stuck in an elevator at Stanford Hospital with a droid the size of a washing machine.

"I remember thinking, 'Whoa, this is scary,' as it whirled around, almost knocking me down," the Microsoft researcher recalled. "Then, I thought, 'What if I were a patient?' There could be big issues here."

We're still far from the sci-fi dream of having robots whirring about and catering to our every need. But little by little, we'll be sharing more of our space with robots in the next decade, as prices drop and new technology creates specialized machines that clean up spilled milk or even provide comfort for an elderly parent.

Now scientists and legal scholars are exploring the likely effects. What happens if a robot crushes your foot, chases your cat off a ledge or smacks your baby? While experts don't expect a band of Terminators to attack or a "2001: A Space Odyssey" computer that takes control, even simpler, benign robots will have legal, social and ethical consequences.

"As we rely more and more on automated systems, we have to think of the implications. It is part of being a responsible scientist," Horvitz said.

Horvitz assembled a team of scientists this year when he was president of the Association for the Advancement of Artificial Intelligence and asked them to explore the future of human-robot interactions. A report on their discussions is due next year.

For years, robots have been used outside the home. They detect bombs on the battlefield, build cars in factories, and deliver supplies and visit patients in hospitals.

But the past few years have seen the rise of home robots. Mainly they are used for tasks like vacuuming (think Roomba). There are also robotic lawn mowers, duct cleaners, surveillance systems and alarm clocks. There are robotic toys for entertainment, such as Furby. Robotic companions, like Paro the harbor seal, comfort the elderly. By 2015, personal robot sales in the U.S. will exceed $5 billion, more than quadrupling what they are now, according to ABI Research, which analyzes technology trends.

"You won't see Rosie from 'The Jetsons,' but you're going to see more and more robots that help maintain your home. They'll pick up stuff off the floor, stock your fridge, carry stuff from the car," said Colin Angle, CEO of iRobot Corp., which makes the Roomba.

As such 'bots become more sophisticated, they could complicate questions about product liability. Ryan Calo, a fellow with Stanford's Center for Internet and Society, pointed out in a recent panel discussion at Stanford Law School that the original manufacturer might not always be liable if a robot went haywire.

"Robots are not just things the manufacturer builds and you go out and use them in a specific way. Robots can often be instructed, they can be programmed, you can have software that is built upon by others," he said.

There are no laws in the U.S. specifically governing robots, and discussion of them usually leads to science fiction writer Isaac Asimov's Three Laws of Robotics, which debuted in his 1942 short story "Runaround."

The first of Asimov's laws is that robots should do no harm. It's also one of the biggest considerations when manufacturing the next generation of personal robots.

"If a robot becomes increasingly autonomous and can make its own decisions, what happens if the robot does not carry out the exact wishes of the person?" said George Bekey, a robotics researcher and professor emeritus at University of Southern California.

As robots interact more closely with people, the bonds some people form with the machines - even ones that do not look like humans - might need to be considered.

Shoppers personalize their Roombas, naming and decorating them, for example. Angle recalled an incident when a soldier plucked a banged-up military robot nicknamed Scooby from an Iraqi battlefield and carried it to a depot to be fixed.

"It's doing you a service, you're going to get attached to it," Angle said.

Ronald Arkin teaches a course on robots and society at Georgia Tech and directs the school's Mobile Robot Laboratory. His most recent book is titled "Governing Lethal Behavior in Autonomous Robots."

"There needs to be ethics embedded in the systems," he said. "It's not just making a system that assists someone. It's making a system that interacts with someone in a way that respects their dignity."

Horvitz said his panel will recommend more research into the psychological reactions humans have to robotic systems. The group, he said, also suggests machines be designed with the ability to explain their reasoning to humans.

While ethicists, lawyers and roboticists ponder how to best integrate humans and autonomous machines, there is some evidence that a balance is already beginning to be struck.

After returning to the Stanford hospital on another occasion, Horvitz noticed a sign hanging above the spot where he had his harrowing experience. It read: "Please Do Not Board The Elevator With The Robot."


©2009 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


Citation: Scientists, lawyers mull effects of home robots (2009, December 5), retrieved 24 August 2019.


User comments

Dec 05, 2009
"even simpler, benign robots will have legal, social and ethical consequences."

This is SO stupid but so TYPICAL OF decadent 21st century America and its legal priesthood. They apply the precautionary principle to EVERYTHING, which effectively says that we can have no new technology or ways of doing anything whatsoever because we can't know in advance what the likely outcomes are going to be.

I worked in China in the early 1990s. What Americans don't know is that consumer technology developed there is often not introduced on the American market simply because of the obstacles that are thrown in the way of anything new here. If I saw it happen once, I saw it happen half a dozen times. Wide-screen televisions, for example, were available there in 1994. Ask yourself when you saw the first one here. Ditto fuzzy-logic food cookers and washing machines.

Americans need to wake up and realise that they've become a technological and cultural backwater.

Dec 06, 2009
Liability issues, imo, are probably the biggest reason we don't have cars that can drive themselves, at least on the highways, yet. You can't tell me the technology's not there, with cars that automatically correct oversteering, stop before you hit something, and so on. That challenge where an autonomous vehicle travels several hundred miles of off-road terrain has been won several times over. Now the challenge is about time to complete it. But the problem is, if there's an accident while the robot's driving, who's at fault, and who pays for the damages?

Dec 06, 2009
But the problem is, if there's an accident while the robot's driving, who's at fault, and who pays for the damages?

That's exactly the kind of thinking vanderMerwe alludes to. Instead of learning from mishaps and making things better (and giving the people who design stuff the benefit of the doubt that they really try to do as good a job as they know how to) the knee-jerk response is to seek out guilty parties and sue them into oblivion.

This is hamstringing engineering, because it puts those people out of business who have the most knowledge about the subject and are in the best position to improve on a product.

Engineering, especially with complex systems, is never error free the first time around. Surprises always crop up (I know, I'm an engineer myself).

Let's face it: Not one lawyer has ever MADE anything that makes your life better. Engineers do that.

Dec 06, 2009
That whole Asimov's laws thing is silly. It would be hackable like anything else. While I'm sure great efforts will be made to make all robots safe, I'm sure someday we will hear about a robot assassin/murder machine, a modified or home-built device that seeks out a certain person, the idea being to make it untraceable. That may sound paranoid, but it will no doubt eventually, rarely, happen. As robots become more common, they won't draw attention. There would be no way of scanning for a robot's 'intentions'. Instructions could even be written to modify task and behavior to 'kill' only after reaching the target, thereby appearing totally normal until then. Not that I think such a thing would be cool, just being realistic. While I hope terrorism is less in the future, such a thing would be a natural choice for them. An extension of the cellphone detonator. My fear is not of robots themselves or their autonomy; it's of programmers of destructive intent.

Dec 06, 2009
Think of it from the user end though. Imagine you get in your robotic car, tell it to take you to Grandma's house, and lean back to take a nap. Halfway there, there's an accident because someone else who's manually driving does something unexpected (though not illegal), your robot does the wrong thing, and you end up in the hospital and need a new car. That hospital and new car thing cost money. Who's supposed to pay for that? The other guy who didn't do anything wrong? You for not monitoring your robot the whole time? Or the engineer/manufacturer for not making the robot perfect? None of those options are practically, morally or economically satisfactory, and until a fair way to apportion liability is found, implementation of such technology will be limited. This isn't the lawyer's fault, it's basic human nature and our conceptions of responsibility that are at stake here.

Dec 06, 2009
Imagine you buy a hammer and drop it on your foot: Sue the hammer manufacturer, right?

Only buy stuff you understand - especially stuff of which you understand the limits. If you don't understand the limits of an autopiloted car then don't get in one (or just say "what the heck, I'll accept the risks"). But don't come screaming later on about "I didn't know it could/would do that!"

Take some friggin' responsibility in your life (or go buy insurance)

Dec 07, 2009
No-fault insurance would take care of all of this, and there could be a pool of money, collected by the manufacturer from sales, to add to the ability to pay for damages. It's not impossible to come up with several schemes to deal with liability. It's not always a clear-cut case of 'user error', so it's not just always a macho 'take responsibility.' Suppose you swing a hammer and the head comes off and flies across a yard and puts out the eye of a neighbor child who is 9. Is there a shared responsibility? Was the hammer defective? Was the user negligent in maintenance and use? These are things that cannot be known ahead of time, and thinking them through is not an idiotic idea but prudent. Nowhere in this article did I see mention of the idea that 100% reliability was demanded before robots were sold.

Dec 07, 2009
Suppose you swing a hammer and the head comes off and flies across a yard and puts out the eye of a neighbor child who is 9

If the manufacturer failed to meet standards then he's got a problem. If no standards are set then he does not.

But I bet autopilots will have certification procedures to go through. If all those standards are met then an unforeseen reaction to an unforeseen incident can't be the liability of the manufacturer.

Ex-post-facto laws are illegal.
