Cooperative robots that learn = less work for human handlers (w/ video)

(PhysOrg.com) -- Learning a language can be difficult for some, but for babies it seems quite easy. With support from the National Science Foundation (NSF), linguist Jeffrey Heinz and mechanical engineer Bert Tanner have taken some cues from the way humans learn to put words and thoughts together, and are teaching language to robots.

This innovative collaboration began a few years ago at a meeting at the University of Delaware. The event, organized by the dean's office, brought together recently hired faculty from the arts and sciences and from engineering; each gave a one-minute slide presentation about his or her research.

"That's how we became aware of what each other was doing," says Tanner. "We started discussing ideas about how we could collaborate, and the NSF project came as a result of that. Once we started seeing things aligning with each other and clicking together, we thought, 'Oh, maybe we really have something here that the world needs to know about.'"

One goal for this project is to design cooperative robots that can operate autonomously and communicate with each other in dangerous situations, such as in a fire or at a disaster site.

Robots at a building fire, for instance, could assess the situation and take the most appropriate action without a direct command from a human.

"We would like to make the robots adaptive--learn about their environment and reconfigure themselves based on the knowledge they acquire," explains Tanner.

He hopes one robot could follow another robot, watch what it was doing and infer it should be doing the same thing.

"The robots will be designed to do different tasks," says Heinz. "We have eyes that see, ears that hear and we have fingers that touch. We don’t have a 'universal sensory organ.' Likewise in the robotics world, we're not going to design a universal robot that's going to be able to do anything and everything."

Each robot will play to its talents and strengths. The robots need to be aware of their own capabilities, those of the other robots around them and the overall goal of their mission.

"If the two robots are working together, then you can have one [ground robot] that can open the door and one [flying robot] that can fly through it," says Heinz. "In that sense, those two robots working together can do more than if they were working independently."

The researchers base their strategy for robot language on the structure of human languages.

Words and sentences in every language have complex rules and structures--do's and don'ts for what letters and sounds can be put in what order.

"So a robot's "sentence" in a sense, is just a sequence of actions that it is conducting," says Heinz. "And there will be constraints on the kinds of sequences of actions that a robot can do."

"For example," Heinz says, "In Latin, you can have Ls and Rs in words, but for the most part, you can't have two non-adjacent Ls unless you have an R in between them."

A robot's language follows a similar pattern. If a robot can do three things--move, grasp an object in its claw, and release that object--it can only do those things in certain orders. It cannot grasp an object twice in a row, for instance.

"If the robot has something in its claw, it cannot possibly hold another thing at the same time," says Tanner. "It has to lay it down before it picks up something else. We are trying to teach a these kinds of constraints, and this is where some of the techniques we find in linguistics come into play."

Heinz and Tanner want the robots to be able to answer questions about events happening around them. "Is the fire going to spread this way or that way? Is some unknown agent going to move this way or that way? These kinds of things," says Heinz.

And in designing solutions to the challenges in disaster situations, timing is crucial.

"There's always a tradeoff in how much you can compute, and how many solutions you can afford to lose," says Tanner. "For us, the priority is on being able to compute reasonable solutions quickly, rather than searching the whole possible space of solutions."

Heinz and Tanner's research is helping to prove that two heads--or in this case, two circuit boards--are better than one.

Citation: Cooperative robots that learn = less work for human handlers (w/ video) (2011, June 28) retrieved 29 March 2024 from https://phys.org/news/2011-06-cooperative-robots-human-handlers-video.html
