The human touch makes robots defter

November 7, 2013 by Bill Steele
Graduate students Ian Lenz, left, and Ashesh Jain with the many-jointed Baxter model robot from Rethink Robotics. In Ashutosh Saxena's Personal Robotics Lab the robot is called "Yogi" because all their robots are named for bears, in honor of Cornell's mascot.

Cornell engineers are helping humans and robots work together to find the best way to do a job, an approach called "coactive learning."

"We give the a lot of flexibility in learning," said Ashutosh Saxena, assistant professor of computer science. "We build on our previous work in teaching robots to plan their actions, then the user can give corrective feedback."

Saxena's research team will report their work at the Neural Information Processing Systems conference in Lake Tahoe, Calif., Dec. 5-8.

Modern industrial robots, like those on automobile assembly lines, have no brains, just memory. An operator programs the robot to move through the desired action; the robot can then repeat the exact same action every time a car goes by.

But off the assembly line, things get complicated: A robot working in a home has to handle tomatoes more gently than canned goods. If it needs to pick up and use a sharp kitchen knife, it should be smart enough to keep the blade away from humans.

Saxena's team, led by Ph.D. student Ashesh Jain, set out to teach a robot to work on a supermarket checkout line, modifying a Baxter robot from Rethink Robotics in Boston, designed for assembly line work. It can be programmed by moving its arms through an action, but it also offers a mode where a human can make adjustments while an action is in progress.

With multiple joints, a Baxter robot can move more flexibly than a human, but it would be hard for a human to decide how best to use those arms, so the robot is programmed to plan its own movements, then allow humans to make corrections.

The Baxter's arms have two elbows and a rotating wrist, so it's not always obvious to a human operator how best to move the arms to accomplish a particular task. So the researchers, drawing on previous work, added programming that lets the robot plan its own motions. It displays three possible trajectories on a touch screen where the operator can select the one that looks best.
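To make the ranking step concrete, here is a minimal sketch assuming a linear preference score over hand-designed trajectory features. The function names and feature choices are illustrative assumptions, not the Cornell team's actual code.

```python
# A sketch of the "plan candidates, rank them, let the operator choose" step,
# assuming a linear score over trajectory features. Names and features are
# illustrative, not the published implementation.
import numpy as np


def score(weights: np.ndarray, features: np.ndarray) -> float:
    """Linear preference score of one candidate trajectory (higher is better)."""
    return float(weights @ features)


def top_k(candidates, featurize, weights, k=3):
    """Rank planner candidates and keep the k best to show on the touch screen.

    candidates: list of planner outputs (e.g., arrays of joint-space waypoints)
    featurize:  maps a candidate to a feature vector (smoothness, clearance
                from people, height above the counter, ...)
    weights:    the robot's current preference weights
    """
    ranked = sorted(candidates,
                    key=lambda c: score(weights, featurize(c)),
                    reverse=True)
    return ranked[:k]
```

The operator's selection from the displayed top three is itself a weak preference signal that the learner can use.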

Then humans can give corrective feedback. As the robot executes its movements, the operator can intervene, guiding the arms to fine-tune the trajectory. The robot has what the researchers call a "zero-G" mode, in which its arms hold their position against gravity but allow the operator to move them. The operator's correction need not be the best possible one, only slightly better than what the robot proposed. The researchers' learning algorithm lets the robot learn incrementally, refining its trajectory a little more each time the operator makes adjustments. Even with weak but incrementally improving feedback from the user, the robot converges on an optimal movement.
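The incremental update behind this kind of coactive learning can be sketched as a preference-perceptron step: the weights shift toward the features of the operator's corrected trajectory and away from those of the trajectory the robot proposed. This sketch builds on the linear score above; the learning rate and feature representation are assumptions, not the paper's exact formulation.

```python
# A minimal coactive-learning update from one round of corrective feedback.
import numpy as np


def coactive_update(weights, proposed_feats, corrected_feats, lr=1.0):
    """Shift weights toward the operator's (slightly better) trajectory.

    proposed_feats:  features of the trajectory the robot executed
    corrected_feats: features after the operator's zero-G adjustment
    """
    return weights + lr * (corrected_feats - proposed_feats)


# Example: after each correction, future candidates are re-ranked with the
# updated weights (feature values here are made up for illustration).
w = np.zeros(3)
w = coactive_update(w,
                    proposed_feats=np.array([0.9, 0.1, 0.4]),
                    corrected_feats=np.array([0.6, 0.0, 0.5]))
```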

The robot learns to associate a particular trajectory with each type of object. A quick flip over might be the fastest way to move a cereal box, but that wouldn't work with a carton of eggs. Also, since eggs are fragile, the robot is taught that they shouldn't be lifted far above the counter. Likewise, the robot learns that sharp objects shouldn't be moved in a wide swing; they are held in close, away from people.
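One plausible way such object-dependent preferences enter the score is through context features that switch on with object properties, so the learned weights can penalize lifting fragile items high or swinging sharp ones near people. The properties, signs and units below are illustrative assumptions, not the system's published feature set.

```python
# Hypothetical object-aware features for the linear score sketched earlier.
import numpy as np


def object_context_features(max_height_m, min_human_dist_m, swing_radius_m,
                            fragile=False, sharp=False):
    """Features that let learned weights discourage risky handling.

    fragile items: lifting far above the counter should score badly
    sharp items:   passing close to people or swinging wide should score badly
    """
    return np.array([
        max_height_m if fragile else 0.0,     # e.g., a carton of eggs
        -min_human_dist_m if sharp else 0.0,  # keep knives away from people
        swing_radius_m if sharp else 0.0,     # avoid wide swings with blades
    ])
```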

As a first step, the robot computes three possible trajectories for moving an object and displays them on a touch screen. After an operator selects one, the robot goes through the motions and the operator can make refinements.

In tests with users who were not part of the research team, most were able to train the robot on a particular task with just five rounds of corrective feedback. The robot was also able to generalize what it learned, adjusting when the object, the environment or both were changed.
