Robots that understand contextual commands

August 31, 2017 by Adam Conner-Simons
ComText allows robots to understand contextual commands such as, “Pick up the box I put down.” Credit: Tom Buehler/MIT CSAIL

Despite what you might see in movies, today's robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.

For example, if you put a specific tool in a toolbox and ask a robot to "pick it up," it would be completely lost. Picking it up means being able to see and identify objects, understand commands, recognize that the "it" in question is the tool you put down, go back in time to remember the moment when you put down the tool, and distinguish the tool you put down from other ones of similar shapes and sizes.

Recently, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have gotten closer to making this type of request easier: In a new paper, they present an Alexa-like system that allows robots to understand a wide range of commands that require contextual knowledge about objects and their environments. They've dubbed the system "ComText," for "commands in context."

The toolbox situation above was among the types of tasks that ComText can handle. If you tell the system that "the tool I put down is my tool," it adds that fact to its knowledge base. You can then update the robot with more information about other objects and have it execute a range of tasks like picking up different sets of objects based on different commands.
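To make that idea concrete, here is a minimal sketch in Python of how a stated fact might be stored and later used to resolve a possessive command. This is not ComText's actual implementation (the paper describes a probabilistic model); every name below is hypothetical and for illustration only.

    # Toy illustration (not ComText's real implementation; all names are
    # hypothetical) of how a spoken fact such as "the tool I put down is
    # my tool" could be recorded and used to resolve "pick up my tool".

    knowledge_base = []  # list of (object, relation, value) facts

    def add_fact(obj, relation, value):
        """Record a declarative fact the robot has been told or has observed."""
        knowledge_base.append((obj, relation, value))

    def resolve(relation, value):
        """Return all objects whose stored facts match the query."""
        return [o for (o, r, v) in knowledge_base if r == relation and v == value]

    # The user puts down a wrench and says "the tool I put down is my tool".
    add_fact("wrench_02", "owner", "user")

    # Later command: "pick up my tool" -> find objects owned by the user.
    print(resolve("owner", "user"))  # ['wrench_02']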

"Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3-D maps generated from sensors," says CSAIL postdoc Rohan Paul, one of the lead authors of the paper. "This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say."

The team tested ComText on Baxter, a two-armed humanoid robot developed for Rethink Robotics by former CSAIL director Rodney Brooks.

The project was co-led by research scientist Andrei Barbu, alongside research scientist Sue Felshin, senior research scientist Boris Katz, and Professor Nicholas Roy. They presented the paper at last week's International Joint Conference on Artificial Intelligence (IJCAI) in Australia.

How it works

Things like dates, birthdays, and facts are forms of "declarative memory." There are two kinds of declarative memory: semantic memory, which is based on general facts like "the sky is blue," and episodic memory, which is based on personal facts, like remembering what happened at a party.

Most approaches to robot learning have focused only on semantic memory, which obviously leaves a big knowledge gap about events or facts that may be relevant context for future actions. ComText, meanwhile, can observe a range of visuals and natural language to glean "episodic memory" about an object's size, shape, position, type, and even whether it belongs to somebody. From this knowledge base, it can then reason, infer meaning, and respond to commands.
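As a rough illustration of that split, the sketch below separates timeless semantic facts from time-stamped episodic observations and uses the latter to resolve a phrase like "the box I put down." This is hypothetical Python, not the paper's formulation (which uses probabilistic temporal grounding graphs); it only shows why episodic memory is needed for such references.

    # Minimal sketch, assuming a toy split between timeless semantic facts
    # and time-stamped episodic events. Names and structures are illustrative,
    # not ComText's actual data model.
    from dataclasses import dataclass

    semantic_memory = {"sky": "blue"}  # general facts, no time attached

    @dataclass
    class Event:          # one episodic observation
        time: float       # when it happened
        actor: str        # who did it
        action: str       # e.g. "put_down"
        obj: str          # which object was involved

    episodic_memory = [
        Event(1.0, "user", "put_down", "box_A"),
        Event(2.5, "user", "put_down", "box_B"),
    ]

    def last_put_down_by(actor):
        """Resolve 'the box I put down' to the most recently put-down object."""
        events = [e for e in episodic_memory
                  if e.actor == actor and e.action == "put_down"]
        return max(events, key=lambda e: e.time).obj if events else None

    print(last_put_down_by("user"))  # 'box_B'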

"The main contribution is this idea that robots should have different kinds of memory, just like people," says Barbu. "We have the first mathematical formulation to address this issue, and we're exploring how these two types of memory play and work off of each other."

The Baxter robot picks up a block using the ComText system. Credit: Tom Buehler/MIT CSAIL

With ComText, Baxter was successful in executing the right command about 90 percent of the time. In the future, the team hopes to enable robots to understand more complicated information, such as multi-step commands, the intent of actions, and how to use an object's properties to interact with it more naturally.

For example, if you tell a robot that one box on a table has crackers, and one box has sugar, and then ask the robot to "pick up the snack," the hope is that the robot could deduce that sugar is a raw material and therefore unlikely to be somebody's "snack."

By creating much less constrained interactions, this line of research could enable better communications for a range of robotic systems, from self-driving cars to household helpers.

"This work is a nice step towards building robots that can interact much more naturally with people," says Luke Zettlemoyer, an associate professor of computer science at the University of Washington who was not involved in the research. "In particular, it will help robots better understand the names that are used to identify objects in the world, and interpret instructions that use those names to better do what users ask."

More information: Rohan Paul et al. Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (2017). DOI: 10.24963/ijcai.2017/629
