Robots learn to pick up oddly shaped objects

May 09, 2012 By Bill Steele
The universal jamming gripper picking up a pair of tongs. The trick is to decide where to grab.

(Phys.org) -- When Cornell engineers developed a new type of robot hand that could pick up oddly shaped objects, it presented a challenge: It was easy for a human operator to choose the best place to take hold of an object, but an autonomous robot, like the ones we may someday have helping around the home or office, would need a new kind of programming. So they have developed a procedure -- an algorithm -- that allows a robot to learn grasping skills from experience and apply them in new situations.

Although inspired by the "universal jamming gripper" created in the lab of Hod Lipson, associate professor of mechanical and aerospace engineering and of computing and information science, the new method is "hardware agnostic," the researchers said, and will work with any type of gripper.

The work by Lipson and Ashutosh Saxena, assistant professor of computer science, a specialist in "machine learning," will be presented May 16 at the International Conference on Robotics and Automation in St. Paul, Minn. Co-authors of their paper are graduate students Yun Jiang and John Amend.

The new grasping algorithm was tested with a variety of objects.

Lipson's gripper consists of a flexible bag filled with a granular material. As the bag settles on an object it deforms to fit, then air is sucked out of the bag, causing the grains to pull together and tighten the grip. Previous grasping algorithms have been based on 3-D models of the object and the robot's gripping mechanism. A robot's computer brain creates an image of how its hand will look when attached to, say, a cup handle or pencil, and computes the motions needed to arrive in that position. But modeling how a deformable bag shapes around irregular objects is too hard to compute, so the researchers adopted a learning approach.

In a 3-D image of the object, the robot examines a series of rectangles that match the size of the gripper and tests each one on a variety of features. The robot is trained on images of many different objects until it has built a library of features common to good grasping rectangles. Presented with a new object, it chooses the rectangle with the highest score based on the rules it has discovered. For example, if a rectangle is divided into three strips and the center strip is higher than the other two, that might be a good place to grab.
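
To make that scoring step concrete, here is a minimal sketch (not the authors' code) in Python: it evaluates candidate rectangles on a depth image using a few hand-picked features, including the "center strip higher than its neighbors" cue mentioned above, and ranks them with a linear model whose weights are assumed to have been learned from labeled examples. The feature set, array layout and function names are all illustrative assumptions.

# Illustrative sketch, not the authors' implementation: score candidate
# grasp rectangles on a depth image (smaller depth = closer to the camera)
# with hand-picked features and a linear model whose weights would
# normally be learned from labeled training rectangles.
import numpy as np

def strip_feature(depth_patch):
    """Split the patch into three vertical strips and measure how much
    the center strip protrudes toward the camera."""
    h, w = depth_patch.shape
    third = max(w // 3, 1)
    left = depth_patch[:, :third].mean()
    center = depth_patch[:, third:2 * third].mean()
    right = depth_patch[:, 2 * third:].mean()
    # Positive when the center strip sticks up above its neighbors --
    # the "good place to grab" cue described in the article.
    return (left + right) / 2.0 - center

def score_rectangle(depth_image, rect, weights):
    """Compute a feature vector for one candidate rectangle and return
    its score under a (hypothetical) learned linear model."""
    r, c, h, w = rect
    patch = depth_image[r:r + h, c:c + w]
    features = np.array([
        strip_feature(patch),   # center strip higher than the sides
        patch.std(),            # roughness of the surface inside the patch
        -patch.mean(),          # overall closeness of the patch to the camera
    ])
    return float(weights @ features)

def best_grasp(depth_image, candidates, weights):
    """Pick the candidate rectangle with the highest learned score."""
    return max(candidates, key=lambda rect: score_rectangle(depth_image, rect, weights))

In practice the weights would be fit offline on the library of labeled grasping rectangles described above, rather than chosen by hand.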

The robot also considers the overall size and shape of the object to choose a stable grasping point. You don't want to pick up a heavy, irregular object by one end; but arbitrarily choosing the center may not work either. The "center" of a computer mouse is halfway along the cord.
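
As an assumed sketch of that stability consideration (again, not the authors' exact method), the snippet below weights each pixel of the object's silhouette by its apparent height, so the preferred grasp region falls near the bulk of the object, e.g. the body of a mouse rather than the midpoint of its cord. The inputs and function name are hypothetical.

# Assumed sketch: pick a grasp region near the height-weighted centroid
# of the object rather than its geometric center.
import numpy as np

def mass_weighted_center(height_map, mask):
    """Return the (row, col) of the height-weighted centroid of an object.

    height_map: 2D array of estimated heights above the table.
    mask:       boolean 2D array marking pixels that belong to the object.
    """
    weights = np.where(mask, height_map, 0.0)
    total = weights.sum()
    if total == 0:
        raise ValueError("empty or flat object mask")
    rows, cols = np.indices(height_map.shape)
    return (float((rows * weights).sum() / total),
            float((cols * weights).sum() / total))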

To test the method, the researchers fitted an industrial robot arm with the jamming gripper and a Microsoft Kinect 3-D camera. In trying to pick up 23 objects, including tools, toys and dishes, the robot succeeded 90 to 100 percent of the time, depending on the type of object. In most cases, the robot was able to successfully grasp new objects that had not been in the training set. Deformable objects like a shoe or a purse were harder, with the robot averaging only 67 percent success.

The researchers ran the same tests with a simple "pick it up at the center" directive, which succeeded only 30 to 50 percent of the time, except on flat objects, where both approaches tied at 89 percent.

The algorithm was also tested with the standard parallel jaws most modern robots use, with about the same results, except that the jaws were 100 percent successful in picking up soft, deformable objects. A future robot may need interchangeable hands for different jobs, and an extension of their work, the researchers said, might be to include a gripper selection feature.
