Laundry duty getting you down? Robots to the rescue!

Jun 30, 2011

Folding a towel or a T-shirt is kind of a mindless, simple chore, unless you’re a robot. Then it’s still mindless, but not so simple.

Commercial robotic devices can manipulate identically shaped objects — flawlessly fitting together parts in a car assembly line, for example. But they can’t deal with novelty. 

A more useful — and ambitious — robot could encounter objects with flexible shapes, yet still determine what it’s dealing with. Such a robot could take on an array of disarray: It could pick up each article from a pile of towels and clothes, figure out its shape and fold it.

Pieter Abbeel, an assistant professor of electrical engineering and computer sciences at UC Berkeley, and his students have now provided a human-sized robot with these skills — part of Abbeel’s long-term effort to greatly expand the robotic repertoire.

Robot programmers create thousands of computer instructions, called lines of code, to get their metal servants to perform correctly. Abbeel and his students developed programs that enable their robot to eliminate one possibility after another until it reaches a single inescapable conclusion: the exact shape of the cloth object it’s holding. Then it can finally get down to the business of folding.  

The lab first tried programming the robot to recognize the geometry of the piece of clothing when it is holding it up. They mounted two high-resolution cameras on the robot — its “eyes” — to produce images in which the micro-texture of the towel could be observed.

For each pixel the robot imaged, the program directed it to find the corresponding spot in a second image taken from a different viewpoint. This allowed the robot to map out the towel’s 3-D configuration. With that data, it could figure out where the mystery object’s corners were — the first step in starting to manipulate it.
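
For the technically curious, the matching step for a single pair of views works roughly like the sketch below, written here with OpenCV's stock block matcher. The file names and the calibration numbers (focal_px, baseline_m) are illustrative assumptions, not the Berkeley lab's actual setup.

```python
# A minimal stereo-matching sketch, assuming a calibrated, rectified
# camera pair. File names and calibration numbers are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel in the left image, search along the same row of the right
# image for the best-matching block; the horizontal shift is the disparity.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point to pixels

# Triangulation: depth is inversely proportional to disparity.
focal_px, baseline_m = 800.0, 0.06  # assumed camera calibration
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```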

They succeeded, but both the programmers and the robot had to work too hard.

“It was very hard computationally,” Abbeel says. “Matching all pixels across two images would take maybe two to three seconds, but you need to look at many different viewpoints, so it would take maybe five minutes before it could identify a corner, and then it would run through the whole process all over again to find a second corner.”

Abbeel figured there must be a better way. His team developed an approach that allowed the robot to figure out what article it is holding, and where it is holding it, using much simpler and faster visual processing. 

Rather than mapping out the article’s entire 3-D configuration, the new strategy requires the robot to extract only two pieces of information from the images: the lowest point on the article when it’s being held up by one gripper, and the outline of the article in the image when it’s being held up by two grippers.
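
Both cues are cheap to compute once the cloth has been separated from the background. Here is one illustrative way to extract them, assuming a binary segmentation mask is already in hand; the function names are ours, not the lab's.

```python
# Sketch of the two visual cues, assuming 'mask' is a binary image in
# which cloth pixels are 1 and background pixels are 0.
import cv2
import numpy as np

def lowest_point(mask: np.ndarray) -> tuple[int, int]:
    """(row, col) of the lowest cloth pixel; image rows grow downward."""
    rows, cols = np.nonzero(mask)
    i = int(np.argmax(rows))
    return int(rows[i]), int(cols[i])

def outline(mask: np.ndarray) -> np.ndarray:
    """Outer contour of the cloth silhouette as an array of (x, y) points."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea).reshape(-1, 2)
```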

“Since we also provide the robot with an internal model of how cloth will move or hang when being held up, it can figure out what it’s holding with just these two pieces of information,” Abbeel says.

The robot starts out with a very large number of hypotheses — one for each possible clothing article and each possible grasp point on that article. Then it grasps and re-grasps the article hundreds of times, holding it up and taking its image each time. As it repeats this process, the number of hypotheses consistent with the observed heights and contours quickly shrinks, until it reaches a conclusion like “Now I know I’m holding article type C and grasping it at points 36 and 75.” There is no visible “Eureka” moment, but eventually the metal homemaker switches to folding mode.
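
In spirit, that elimination loop looks something like the toy Python below. The article types, grasp points, and hang heights are all invented for illustration; the real system compares each hypothesis against a physics-based cloth model, not a hand-written table.

```python
# A toy hypothesis-elimination loop. For each article: the height (m) it
# hangs to from each of four candidate grasp points, and which point ends
# up lowest (where the robot will re-grasp next). All numbers invented.
MODEL = {
    "towel":   {"heights": [0.40, 0.55, 0.40, 0.55], "next": [1, 2, 3, 0]},
    "t_shirt": {"heights": [0.62, 0.48, 0.70, 0.62], "next": [2, 0, 1, 3]},
    "pants":   {"heights": [0.95, 0.48, 0.95, 0.60], "next": [1, 0, 3, 2]},
}
TOL = 0.03  # allowed mismatch between predicted and measured height (m)

def filter_hypotheses(hypotheses, measured_height):
    """Keep only (article, grasp) pairs consistent with the measurement."""
    return [(a, g) for (a, g) in hypotheses
            if abs(MODEL[a]["heights"][g] - measured_height) <= TOL]

def regrasp(hypotheses):
    """Advance every surviving hypothesis to its predicted next grasp point."""
    return [(a, MODEL[a]["next"][g]) for (a, g) in hypotheses]

# Start with every article/grasp combination as a live hypothesis.
hypotheses = [(a, g) for a in MODEL for g in range(4)]

# Simulated height measurements from a t-shirt first grasped at point 1.
for measured in [0.48, 0.62]:
    hypotheses = filter_hypotheses(hypotheses, measured)
    if len(hypotheses) <= 1:
        break
    hypotheses = regrasp(hypotheses)

print(hypotheses)  # -> [('t_shirt', 0)]: article identified, grasp known
```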

Trying to get a robot to take on household chores is fascinating in and of itself, Abbeel says, but he’s also carrying out the research to learn how to build intelligent systems that can perform far more complex jobs. He is in the very early stages of conceiving a surgical robot that could take on routine tasks for a surgeon, such as tying a knot, freeing the expert to focus on more critical aspects of the surgery.

He is collaborating with heart surgeon Douglas Boyd at UC Davis to identify the most useful contributions a robotic device could make in the surgical setting. Abbeel, Boyd and two other UC faculty scientists have presented a proposal to UC’s Center for Information Technology Research in the Interest of Society (CITRIS) for a proof-of-concept project to develop robot-assisted telesurgery, enabling a surgeon to direct a robotic surgical device remotely. Telesurgery might be used to perform fairly routine but urgent procedures when a surgeon can't get to the hospital in time.

CITRIS is one of UC’s four California Institutes for Science and Innovation, conceived to encourage collaborations between UC researchers in different disciplines and different campuses, and between UC scientists and industry.

Abbeel credits CITRIS with launching his early-stage collaboration with Boyd. “We met at a CITRIS health care workshop that brought together scientists with different interests and skills. We decided to work together so we could develop applications that are useful in the most critical surgical areas.”

So, will robots eventually take away our jobs and leave us all listless?

“Well, of course, they’ve already replaced some assembly-line jobs, but I think people will still be doing 90 percent of what they are already doing — for work and after work,” Abbeel says. “Will people lie on the beach all day if a robot is doing their household chores? Who knows? Maybe they’ll have time to do more of the things they want, like gardening or cooking, or cycling, or maybe developing new kinds of robots.”
