Cooperative robots that learn = less work for human handlers

June 28, 2011 By Miles O'Brien and Marsha Walton
Clifford I. Nass of Stanford University and Robin Murphy of Texas A&M University are exploring ways to make rescue robots more user-friendly by incorporating lessons learned from studies of how humans interact with technology. Rescue robots serve as a trapped disaster victim's lifeline to the outside world. But they are worthless if the victim finds them scary, bossy, out-of-control or just plain creepy. Credit: Texas A&M University

Learning a language can be difficult for some, but for babies it seems quite easy. With support from the National Science Foundation (NSF), linguist Jeffrey Heinz and mechanical engineer Bert Tanner have taken some cues from the way humans learn to put words and thoughts together, and are teaching language to robots.

This innovative collaboration began a few years ago at a meeting at the University of Delaware. The event, organized by the dean's office, brought together recently hired faculty from arts and sciences and from engineering; each gave a one-minute slide presentation about his or her research.

"That's how we became aware of what each other was doing," says Tanner. "We started discussing ideas about how we could collaborate, and the NSF project came as a result of that. Once we started seeing things aligning with each other and clicking together, we thought, 'Oh, maybe we really have something here that the world needs to know about.'"


One goal for this project is to design cooperative robots that can operate autonomously and communicate with each other in dangerous situations, such as in a fire or at a disaster site.

Robots at a building fire, for instance, could assess the situation and take the most appropriate action without a direct command from a human.

"We would like to make the robots adaptive--learn about their environment and reconfigure themselves based on the knowledge they acquire," explains Tanner.

He hopes one robot could follow another robot, watch what it was doing and infer it should be doing the same thing.

"The robots will be designed to do different tasks," says Heinz. "We have eyes that see, ears that hear and we have fingers that touch. We don’t have a 'universal sensory organ.' Likewise in the robotics world, we're not going to design a universal robot that's going to be able to do anything and everything."

Each robot will play to its talents and strengths. The robots need to be aware of their own capabilities, those of the other robots around them and the overall goal of their mission.

"If the two robots are working together, then you can have one [ground robot] that can open the door and one [flying robot] that can fly through it," says Heinz. "In that sense, those two robots working together can do more than if they were working independently."

The researchers base their strategy for robot language on the structure of human languages.

Words and sentences in every language have complex rules and structures--do's and don'ts for what letters and sounds can be put in what order.

"So a robot's 'sentence,' in a sense, is just a sequence of actions that it is conducting," says Heinz. "And there will be constraints on the kinds of sequences of actions that a robot can do."

"For example," Heinz says, "In Latin, you can have Ls and Rs in words, but for the most part, you can't have two non-adjacent Ls unless you have an R in between them."
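The constraint as quoted can be checked in a few lines of code. This is a sketch only: the function name is invented, and the strings in the usage note are made-up words for illustration, not attested Latin.

```python
def violates_latin_ll(word: str) -> bool:
    """True if the word has two non-adjacent 'l's with no 'r' between them."""
    w = word.lower()
    last_l = -1  # index of the most recent 'l' seen so far
    for i, ch in enumerate(w):
        if ch == 'l':
            # Adjacent 'l's (a double letter) are allowed by the rule as stated;
            # only non-adjacent pairs must be separated by an 'r'.
            if last_l >= 0 and i > last_l + 1 and 'r' not in w[last_l + 1:i]:
                return True
            last_l = i
    return False
```

For instance, a word shaped like "floralis" passes (the 'r' sits between the two 'l's), while an invented form like "lalis" fails, because its two 'l's have nothing but a vowel between them.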

A robot's language follows a similar pattern. If a robot can do three things--move, grasp an object in its claw, and release that object--it can only do those things in a certain order. It cannot grasp an object twice in a row.

"If the robot has something in its claw, it cannot possibly hold another thing at the same time," says Tanner. "It has to lay it down before it picks up something else. We are trying to teach these kinds of constraints, and this is where some of the techniques we find in linguistics come into play."
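A constraint like "you cannot grasp while already holding" is exactly what a finite-state machine captures, the same formal device linguists use for sound-pattern rules. The sketch below is an illustration of that idea, not the researchers' actual system; the states, action names, and functions are all assumptions.

```python
# Two states for the claw: "empty" (free) and "holding" (occupied).
# A missing (state, action) pair means that action is forbidden there:
# e.g. there is no ("holding", "grasp") entry, so grasping twice is blocked.
TRANSITIONS = {
    ("empty", "move"): "empty",
    ("empty", "grasp"): "holding",
    ("holding", "move"): "holding",
    ("holding", "release"): "empty",
}

def is_valid_plan(actions, state="empty"):
    """Accept an action sequence iff every step obeys the claw constraint."""
    for action in actions:
        state = TRANSITIONS.get((state, action))
        if state is None:  # the automaton has no such transition: reject
            return False
    return True
```

So a "sentence" like move, grasp, move, release is accepted, while grasp, grasp is rejected at the second step, just as an ill-formed word is rejected by a phonotactic rule.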

Heinz and Tanner want the robots to be able to answer questions about events happening around them. "Is the fire going to spread this way or that way? Is some unknown agent going to move this way or that way? These kinds of things," says Heinz.

And in designing solutions to the challenges in disaster situations, timing is crucial.

"There's always a tradeoff in how much you can compute, and how many solutions you can afford to lose," says Tanner. "For us, the priority is on being able to compute reasonable solutions quickly, rather than searching the whole possible space of solutions."
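That tradeoff can be made concrete with a toy robot-to-task assignment problem. The cost matrix and both functions below are invented for illustration: a greedy pass returns an answer almost instantly, while the exhaustive search is optimal but grows factorially with the number of robots.

```python
from itertools import permutations

# Hypothetical costs: cost[r][t] = time for robot r to complete task t.
cost = [
    [4, 9, 3],
    [8, 2, 7],
    [5, 6, 1],
]

def greedy_assignment(cost):
    """Fast heuristic: each robot in turn takes its cheapest unclaimed task."""
    taken, plan = set(), []
    for r, row in enumerate(cost):
        t = min((t for t in range(len(row)) if t not in taken),
                key=lambda t: row[t])
        taken.add(t)
        plan.append((r, t))
    return plan, sum(cost[r][t] for r, t in plan)

def exhaustive_assignment(cost):
    """Optimal but exponential: try every robot-to-task permutation."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[r][p[r]] for r in range(n)))
    return ([(r, best[r]) for r in range(n)],
            sum(cost[r][best[r]] for r in range(n)))
```

On this matrix the greedy plan costs 10 while the optimal plan costs 7: a "reasonable solution quickly" rather than a guaranteed best one, which is the priority Tanner describes for time-critical disaster scenes.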

Heinz's and Tanner's research is helping to prove that two heads--or in this case two circuit boards--are better than one.
