Robots using tools: With new grant, researchers aim to create 'MacGyver' robot

Oct 09, 2012
Georgia Tech assistant professor Mike Stilman poses with Golem Krang, a humanoid robot designed and built in Stilman’s laboratory to study whole-body robotic planning and control. Stilman's research aims to give robots autonomous capabilities to perform rescues using tools found in the environment. Credit: Josh Meister

Robots are increasingly being used in place of humans to explore hazardous and difficult-to-access environments, but they aren't yet able to interact with their environments as well as humans. If today's most sophisticated robot were trapped in a burning room by a jammed door, it would probably not know how to locate and use objects in the room to climb over debris, pry open the door, and escape the building.

A research team led by Professor Mike Stilman at the Georgia Institute of Technology hopes to change that by giving robots the ability to use objects in their environments to accomplish high-level tasks. The team recently received a three-year, $900,000 grant from the Office of Naval Research to work on this project.

"Our goal is to develop a robot that behaves like MacGyver, the television character from the 1980s who solved complex problems and escaped dangerous situations by using everyday objects and materials he found at hand," said Stilman, an assistant professor in the School of Interactive Computing at Georgia Tech. "We want to understand the basic cognitive processes that allow humans to take advantage of arbitrary objects in their environments as tools. We will achieve this by designing algorithms for robots that make tasks that are impossible for a robot alone possible for a robot with tools."

The research will build on Stilman's previous work on navigation among movable obstacles that enabled robots to autonomously recognize and move obstacles that were in the way of their getting from point A to point B.

"This project is challenging because there is a critical difference between moving objects out of the way and using objects to make a way," explained Stilman. "Researchers in the robot motion planning field have traditionally used computerized vision systems to locate objects in a cluttered environment in order to plan collision-free paths, but these systems have not provided any information about the objects' functions."

To create a robot capable of using objects in its environment to accomplish a task, Stilman plans to develop an algorithm that will allow a robot to identify an arbitrary object in a room, determine the object's potential function, and turn that object into a simple machine that can be used to complete an action. Actions could include using a chair to reach something high, bracing a ladder against a bookshelf, stacking boxes to climb over something, and building levers or bridges from random debris.
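The article does not describe the algorithm's internals, but the identify-then-classify step it outlines can be illustrated with a toy sketch. Everything here is a hypothetical assumption for illustration: the `DetectedObject` fields, the geometric thresholds, and the mapping from shape to simple-machine roles are not from Stilman's work, which would reason over full 3D geometry and contact physics.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """Hypothetical perception output: a rigid body with rough properties."""
    name: str
    length_m: float
    mass_kg: float
    is_rigid: bool

def candidate_functions(obj: DetectedObject) -> list[str]:
    """Toy heuristic mapping an object's geometry to simple-machine roles.

    The thresholds below are illustrative assumptions, not values from
    the research described in the article.
    """
    roles = []
    if obj.is_rigid and obj.length_m > 0.5:
        roles.append("lever")        # long rigid bodies can pry
    if obj.is_rigid and obj.length_m > 1.0:
        roles.append("bridge")       # longer ones can span a gap
    if obj.mass_kg > 2.0:
        roles.append("counterweight")
    return roles

pipe = DetectedObject("metal pipe", length_m=1.2, mass_kg=3.0, is_rigid=True)
print(candidate_functions(pipe))  # ['lever', 'bridge', 'counterweight']
```

In this sketch a single object can fill several roles; a planner would then pick the role that completes the current task.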

By providing the robot with basic knowledge of rigid body mechanics and simple machines, the robot should be able to autonomously determine the mechanical force properties of an object and construct motion plans for using the object to perform high-level tasks.
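The "basic knowledge of rigid body mechanics and simple machines" mentioned above can be made concrete with the textbook lever rule. This is a generic physics sketch, not code from the project; the force and arm-length values are made-up assumptions.

```python
def lever_output_force(applied_force_n: float,
                       effort_arm_m: float,
                       load_arm_m: float) -> float:
    """Ideal (frictionless) first-class lever: F_out = F_in * (effort arm / load arm)."""
    return applied_force_n * (effort_arm_m / load_arm_m)

# Can a robot that exerts 100 N pry open a door that needs 400 N?
# With a 1.0 m effort arm and a 0.2 m load arm (illustrative numbers),
# the lever multiplies the force fivefold.
f_out = lever_output_force(100.0, effort_arm_m=1.0, load_arm_m=0.2)
print(f_out)           # 500.0
print(f_out >= 400.0)  # True: the tool makes the impossible task possible
```

This is exactly the kind of check that turns "a long rigid object" into "a usable machine": the robot's own force stays fixed, and the tool's geometry determines whether the task becomes feasible.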

For example, exiting a burning room with a jammed door would require a robot to travel around any fire, use an object in the room to apply sufficient force to open the stuck door, and locate an object in the room that will support its weight while it moves to get out of the room.

Such skills could be extremely valuable in the future as robots work side-by-side with military personnel to accomplish challenging missions.

"The Navy prides itself on recruiting, training and deploying our country's most resourceful and intelligent men and women," said Paul Bello, director of the cognitive science program in the Office of Naval Research (ONR). "Now that robotic systems are becoming more pervasive as teammates for warfighters in military operations, we must ensure that they are both intelligent and resourceful. Professor Stilman's work on the 'MacGyver-bot' is the first of its kind, and is already beginning to deliver on the promise of mechanical teammates able to creatively perform in high-stakes situations."

To address the complexity of the human-like reasoning required for this type of scenario, Stilman is collaborating with researchers Pat Langley and Dongkyu Choi. Langley is the director of the Institute for the Study of Learning and Expertise (ISLE), and is recognized as a co-founder of the field of machine learning, where he championed both experimental studies of learning algorithms and their application to real-world problems. Choi is an assistant professor in the Department of Aerospace Engineering at the University of Kansas.

Langley and Choi will expand the cognitive architecture they developed, called ICARUS, which provides an infrastructure for modeling various human capabilities like perception, inference, performance and learning in robots.

"We believe a hybrid reasoning system that embeds our physics-based algorithms within a cognitive architecture will create a more general, efficient and structured control system for our robot that will accrue more benefits than if we used one approach alone," said Stilman.
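One plausible reading of such a hybrid system is a loop in which the cognitive layer proposes tool-use actions and a physics-based check prunes the infeasible ones. The sketch below is an assumption about how the two layers might interact, not the project's actual architecture; the action dictionaries and the ideal-lever feasibility rule are illustrative only.

```python
def physics_feasible(action: dict) -> bool:
    """Toy physics filter: verify a proposed pry action with the ideal lever rule."""
    if action["type"] == "pry_door":
        gain = action["effort_arm_m"] / action["load_arm_m"]
        return action["robot_force_n"] * gain >= action["required_force_n"]
    return True  # actions without a physics model pass through unchecked

# Hypothetical proposals from a symbolic planner for the jammed-door scenario.
proposals = [
    {"type": "pry_door", "tool": "ruler", "effort_arm_m": 0.3,
     "load_arm_m": 0.2, "robot_force_n": 100.0, "required_force_n": 400.0},
    {"type": "pry_door", "tool": "pipe", "effort_arm_m": 1.0,
     "load_arm_m": 0.2, "robot_force_n": 100.0, "required_force_n": 400.0},
]

feasible = [p for p in proposals if physics_feasible(p)]
print([p["tool"] for p in feasible])  # ['pipe']
```

The division of labor is the point: the symbolic layer supplies general, structured reasoning about goals and objects, while the physics layer guarantees that whatever plan survives is mechanically sound.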

After the researchers develop and optimize the hybrid reasoning system using computer simulations, they plan to test the software using Golem Krang, a humanoid robot designed and built in Stilman's laboratory to study whole-body robotic planning and control.

This research is sponsored by the Department of the Navy, Office of Naval Research, through grant number N00014-12-1-0143. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Office of Naval Research.
