New 'deep learning' technique enables robot mastery of skills via trial and error

May 21, 2015 by Sarah Yang
This team of UC Berkeley researchers has developed algorithms that enable their PR2 robot, nicknamed BRETT for Berkeley Robot for the Elimination of Tedious Tasks, to learn new tasks through trial and error. Shown, left to right, are Chelsea Finn, Pieter Abbeel, BRETT, Trevor Darrell and Sergey Levine. Credit: UC Berkeley Robot Learning Lab

UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence.

They demonstrated their technique, a type of deep reinforcement learning, by having a robot complete various tasks—putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more—without pre-programmed details about its surroundings.

"What we're reporting on here is a new approach to empowering a robot to learn," said Professor Pieter Abbeel of UC Berkeley's Department of Electrical Engineering and Computer Sciences. "The key is that when a robot is faced with something new, we won't have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it."

The latest developments will be presented on Thursday, May 28, in Seattle at the International Conference on Robotics and Automation (ICRA). Abbeel is leading the project with fellow UC Berkeley faculty member Trevor Darrell, director of the Berkeley Vision and Learning Center. Other members of the research team are postdoctoral researcher Sergey Levine and Ph.D. student Chelsea Finn.

The work is part of a new People and Robots Initiative at UC's Center for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs.

"Most robotic applications are in controlled environments where objects are in predictable positions," said Darrell. "The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings."

Neural inspiration

Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates.

Video showing BRETT, a PR2 robot, learning various motor tasks through trial and error. BRETT used the same "deep learning" algorithm to master all tasks. Credit: UC Berkeley Robot Learning Lab

Instead, the UC Berkeley researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world.

"For all our versatility, humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife, and we do not need to be programmed," said Levine. "Instead, we learn new skills over the course of our life from experience and from other humans. This learning process is so deeply rooted in our nervous system, that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own."

In the world of artificial intelligence, deep learning programs create "neural nets" in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels. This helps the robot recognize patterns and categories among the data it is receiving. People who use Siri on their iPhones, Google's speech-to-text program or Google Street View might already have benefited from the significant advances deep learning has provided in speech and vision recognition.
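To make the idea of layered processing concrete, here is a minimal sketch, not the Berkeley team's code, of how a small neural net passes raw sensory data (here, fake image pixels) through successive layers of weighted sums and nonlinearities so that later layers respond to patterns and categories. All weights and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

# Fake "raw sensory data": a flattened 8x8 grayscale image patch
pixels = rng.random(64)

# Two layers with illustrative random weights (untrained)
W1, b1 = rng.standard_normal((32, 64)) * 0.1, np.zeros(32)
W2, b2 = rng.standard_normal((4, 32)) * 0.1, np.zeros(4)

hidden = relu(W1 @ pixels + b1)    # layer 1: low-level features
scores = W2 @ hidden + b2          # layer 2: scores over 4 categories
category = int(np.argmax(scores))  # best-matching category for this input
```

In a trained network the weights would be tuned on data rather than drawn at random, but the layered flow, raw input in, pattern scores out, is the same.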

Applying deep reinforcement learning to motor tasks has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.

"Moving about in an unstructured 3D environment is a whole different ballgame," said Finn. "There are no labeled directions, no examples of how to solve the problem in advance. There are no examples of the correct solution like one would have in speech and vision recognition programs."

Practice makes perfect

In the experiments, the UC Berkeley researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, or Berkeley Robot for the Elimination of Tedious Tasks.

BRETT is shown here learning how to screw a cap onto a water bottle. Credit: UC Berkeley Robot Learning Lab

They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks. The algorithm controlling BRETT's learning included a reward function that provided a score based upon how well the robot was doing with the task.

BRETT takes in the scene, including the position of its own arms and hands, as viewed by the camera. The algorithm provides real-time feedback via the score based upon the robot's movements. Movements that bring the robot closer to completing the task will score higher than those that do not. The score feeds back through the neural net, so the robot can learn which movements are better for the task at hand.
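The loop described above, try a movement, score it, and keep what scores better, can be sketched in a toy form. This is not the actual BRETT algorithm (which trains a deep neural net end to end); it is a crude trial-and-error search over the parameters of a tiny linear "policy" on a 2-D reaching problem, with the names and numbers below all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
goal = np.array([0.5, -0.3])          # target position in a 2-D plane

def reward(params, state):
    # Policy: linear map from observed state to a movement
    action = params @ state
    new_pos = state[:2] + action
    # Movements that land closer to the goal score higher
    return -np.linalg.norm(new_pos - goal)

state = np.array([0.0, 0.0, 1.0])     # position plus a constant bias feature
params = np.zeros((2, 3))
best = reward(params, state)

# Trial and error: perturb the parameters, keep the change if it scores better
for _ in range(200):
    candidate = params + 0.1 * rng.standard_normal(params.shape)
    r = reward(candidate, state)
    if r > best:
        params, best = candidate, r
```

The real system replaces this random search with gradient-based updates that push the reward signal back through all the layers of the neural net, but the feedback principle, higher-scoring movements reshape the policy, is the same.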

This end-to-end training process underlies the robot's ability to learn on its own. As the PR2 moves its joints and manipulates objects, the algorithm calculates good values for the roughly 92,000 parameters that the neural net needs to learn.

With this approach, when given the relevant coordinates for the beginning and end of the task, the PR2 could master a typical assignment in about 10 minutes. When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.

Abbeel says the field will likely see significant improvements as the ability to process vast amounts of data improves.

"With more data, you can start learning more complex things," he said. "We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in learning capabilities through this line of work."
