'Hallucinating' robots arrange objects for human use

Jun 18, 2012 By Bill Steele
A robot populates a room with imaginary human stick figures in order to decide where objects should go to suit the needs of humans.

(Phys.org) -- If you hire a robot to help you move into your new apartment, you won't have to send out for pizza. But you will have to give the robot a system for figuring out where things go. The best approach, according to Cornell researchers, is to ask "How will humans use this?"

Researchers in the lab of Ashutosh Saxena, assistant professor of computer science, have already taught robots to identify common objects, pick them up and place them stably in appropriate locations. Now they've added the human element by teaching robots to "hallucinate" where and how humans might stand, sit or work in a room, and place objects in their usual relationship to those imaginary people.

Their work will be reported at the International Symposium on Experimental Robotics, June 21 in Quebec, and the International Conference on Machine Learning, June 29 in Edinburgh, Scotland.

Previous work on robotic placement, the researchers note, has relied on modeling relationships between objects. A keyboard goes in front of a monitor, and a mouse goes next to the keyboard. But that doesn't help if the robot puts the monitor, keyboard and mouse at the back of the desk, facing the wall.

Above left, random placing of objects in a scene puts food on the floor, shoes on the desk and a laptop teetering on top of the fridge. Considering the relationships between objects (upper right) is better, but the laptop is facing away from a potential user and the food is higher than most humans would like. Adding human context (lower left) makes things more accessible. Lower right: how an actual robot carried it out. (Personal Robotics Lab)

Relating objects to humans not only avoids such mistakes but also makes computation easier, the researchers said, because each object is described in terms of its relationship to a small set of human poses, rather than to the long list of other objects in a scene. A computer learns these relationships by observing 3-D images of rooms with objects in them, in which it imagines human figures, placing them in practical relationships with objects and furniture. You don't put a sitting person where there is no chair. You can put a sitting person on top of a bookcase, but there are no objects there for the person to use, so that placement is ignored. The computer calculates the distance of objects from various parts of the imagined human figures, and notes the orientation of the objects.
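As a rough illustration of that distance computation (a sketch, not the authors' code), an object can be related to a sampled pose by measuring its distance to each body part. The joint names and coordinates below are invented for the example:

```python
import math

# Illustrative sketch only: relate an object to a sampled human pose by
# measuring its distance to each body part. Joint names and coordinates
# are invented for the example, not taken from the paper.

def pose_object_features(pose_joints, obj_pos):
    """pose_joints: {joint name: (x, y, z)}; obj_pos: (x, y, z).
    Returns one distance feature per joint."""
    return {f"dist_{j}": math.dist(p, obj_pos) for j, p in pose_joints.items()}

# a sitting figure: head at seated eye height, hand at desk-level reach
sitting = {"head": (0.0, 0.0, 1.2), "hand": (0.3, 0.0, 0.8)}
laptop = (0.6, 0.0, 0.75)  # candidate laptop position on a desk

features = pose_object_features(sitting, laptop)
```

A learner could then observe that small `dist_hand` values are common for laptops and remotes, while TVs sit farther away but oriented toward the head.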

Eventually it learns commonalities: There are lots of imaginary people sitting on the sofa facing the TV, and the TV is always facing them. The remote is usually near a human's reaching arm, seldom near a standing person's feet. "It is more important for a robot to figure out how an object is to be used by humans, rather than what the object is. One key achievement in this work is using unlabeled data to figure out how humans use a space," Saxena said.

In a new situation the robot places human figures in a 3-D image of a room, locating them in relation to objects and furniture already there. "It puts a sample of human poses in the environment, then figures out which ones are relevant and ignores the others," Saxena explained. It decides where new objects should be placed in relation to the human figures, and carries out the action.
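Under liberal assumptions about the representation (2-D floor coordinates, hand-picked thresholds), the sample-then-filter step Saxena describes could be sketched like this; none of the function names or numbers come from the paper:

```python
import math
import random

# Hypothetical sketch of "put a sample of human poses in the environment,
# then figure out which ones are relevant": poses are 2-D floor points,
# and a pose counts as relevant only if it is near some furniture.

def sample_poses(width, depth, n=200, seed=0):
    rng = random.Random(seed)
    return [(rng.uniform(0, width), rng.uniform(0, depth)) for _ in range(n)]

def relevant_poses(poses, furniture, max_dist=0.6):
    return [p for p in poses if any(math.dist(p, f) < max_dist for f in furniture)]

def best_placement(candidates, poses):
    # choose the candidate spot nearest to any relevant pose
    return min(candidates, key=lambda c: min(math.dist(c, p) for p in poses))

room_poses = sample_poses(4.0, 3.0)           # a 4 m x 3 m room
sofa = [(1.0, 1.0)]                           # one piece of furniture
kept = relevant_poses(room_poses, sofa)       # poses far from the sofa are ignored
spot = best_placement([(0.5, 0.5), (3.5, 2.5)], kept)
```

Because every kept pose sits within 0.6 m of the sofa, the candidate near the sofa wins, which mirrors the article's point: the irrelevant poses in the empty corner never influence the placement.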

The researchers tested their method using images of living rooms, kitchens and offices from the Google 3-D Warehouse, and later, images of local offices and apartments. Finally, they programmed a robot to carry out the predicted placements in local settings. Volunteers who were not associated with the project rated the placement of each object for correctness on a scale of 1 to 5.

Comparing various algorithms, the researchers found that placements based on human context were more accurate than those based solely on relationships between objects, but the best results of all came from combining human context with object-to-object relationships, with an average score of 4.3. Some tests were done in rooms with furniture and some objects, others in rooms where only a major piece of furniture was present. The object-only method performed significantly worse in the latter case because there was no context to use. "The difference between previous works and our [human-context] method was significantly higher in the case of empty rooms," Saxena reported.
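The reported combination can be read as blending two scores when ranking candidate placements. The weighting and the example scores below are assumptions for illustration, not the paper's actual model:

```python
# Hedged sketch: blend a human-context score with an object-to-object
# score when ranking candidate placements. The 0.5 weight and the scores
# themselves are invented for the example.

def combined_score(human_score, object_score, alpha=0.5):
    return alpha * human_score + (1 - alpha) * object_score

# candidate -> (human-context score, object-context score)
candidates = {
    "front_of_desk": (0.9, 0.7),  # faces the imagined user
    "back_of_desk":  (0.2, 0.8),  # tidy next to the monitor, but faces away
}
best = max(candidates, key=lambda c: combined_score(*candidates[c]))
```

The blend captures why the hybrid wins: object-to-object cues alone would slightly prefer the back of the desk, but the human-context term pulls the placement toward the user.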

The research was supported by a Microsoft Faculty Fellowship and a gift from Google. Marcus Lin, M.Eng. '12, received an Academic Excellence Award from the Department of Computer Science in part for his work on this project.


User comments

Bascule
Jun 18, 2012
'Hallucinating'? I think the word you're looking for is 'Imagine' or possibly 'Visualising'.

I often imagine how something will work when I'm arranging furniture and I rarely consider myself to be hallucinating.
roldor
Jun 19, 2012
You are also hallucinating, when you think to move into a bigger house.
