Transforming robotics with biologically inspired learning models

Jun 13, 2011 By Aisha Sohail
Aisha Sohail, Heather Ames and Jasmin Leveille run simulations of the artificial visual system in Cog. Credit: Massimiliano Versace, CELEST, Boston University

I walked into the building and there was a human-sized robot waiting to greet me. 

It shook my hand, took my coat and brought me to sit in the room where my interview was going to be held. It asked me whether I needed a drink, and then proceeded to clean the countertops and water the plants. When I asked whether there was a reason it was working so hard, it simply said: "I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do."

If you have ever seen Stanley Kubrick's tribute to humanoid computers, "2001: A Space Odyssey," then you already know I was merely making an allusion ...


What actually happened during my first visit to the Neuromorphics Lab at Boston University was a slightly different, though no less entertaining, scenario.

I walked into an office and there was a Roomba-like robot approaching and avoiding multicolored objects. It made its decisions based on a reward history ("bad robot" vs. "good robot").

On a desk, I noticed a dismembered radio-controlled (RC) helicopter with half of its parts missing. Peeking into an additional room, I couldn't help but notice a toy car with a camera installed at the helm and EEG electrodes hanging off all sides. All around me, researchers were creating and refining artificial brain systems in virtual environments before deploying them in robots.

Even before sitting down to talk with anyone about job opportunities, I knew this was the place for me.

The Neuromorphics Lab is researching innovative robot learning algorithms. Imagine having a cleaning robot that could do what no other cleaning robot is currently able to do: learn. It could learn the one place in your house where your dog always loves to wipe his grubby little paws when he comes inside. It could learn that Tuesdays are softball practice, which means a certain trail of dirt leading up to your room.

The keyword here, obviously, is learning. The problem with the conventional approach to robotics is that it requires explicit programming for robots to carry out specific tasks, leading to a lack of autonomous, general-purpose artificial intelligence, or AI.

Working in collaboration with Hewlett-Packard (HP) Labs, the Neuromorphics Lab, part of the National Science Foundation (NSF)-sponsored Center of Excellence for Learning in Education, Science and Technology (CELEST), has undertaken the ambitious project of creating a brain on a chip--a fundamental precursor to the design of autonomous robotics and general intelligence.

Researchers in the Neuromorphics Lab are closer than ever to achieving the goal of creating a general, mammalian-type intelligence. Most people have never even heard the term "neuromorphic"--which describes technology whose form ("morphic") is based on brain ("neuro") architecture. The neural models being developed by the Neuromorphics Lab implement "whole brain systems," or large-scale brain models that allow virtual and robotic agents to learn on their own to interact with new environments.

Like any intelligent biological system, artificial autonomous and adaptive systems need three things: a mind, a brain and a body. The CELEST models run on a software platform called Cog, which serves as the operating system within which the artificial "brain" is developed.

Along with the hardware--currently general-purpose processors to be augmented by innovative nanotechnologies under development at HP--Cog offers an ideal environment for the design and testing of whole-brain simulation. The work of the Neuromorphics Lab focuses primarily on engineering the mind of the adaptive system. Once complete, a virtual animat, equipped with the artificial brain, will be able to learn how to navigate in its environment based on its inherent capabilities for responding to motivations, evaluating sensory data, and making intelligent decisions that are transformed into motor outputs.

As a new employee of the Neuromorphics Lab, I recently participated in a demonstration of the adaptive robot. I watched as it learned to distinguish, and develop a preference for, a set of multicolored blocks. Although this may seem like a trivial task, one that comes naturally to humans, its difficulty lies in the fact that the animat is not explicitly programmed to approach certain colored blocks; rather, it learns which objects to approach and avoid based on the rewards and punishments associated with them. The process is similar to how animals learn by trial and error to interact with a world they were not "pre-programmed" to act upon.
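To make the idea concrete, here is a minimal sketch of that kind of reward-driven learning. It is not the lab's Cog model; the colors, feedback values, learning rate and exploration rate are all illustrative assumptions. The agent starts with no preferences and, purely from "good robot"/"bad robot" feedback, learns which blocks to approach:

```python
import random

# Hypothetical setup: three block colors; the trainer's hidden feedback rule
# says "good robot" (+1) for red and "bad robot" (-1) for the others.
COLORS = ["red", "green", "blue"]
TRAINER_FEEDBACK = {"red": 1.0, "green": -1.0, "blue": -1.0}

values = {c: 0.0 for c in COLORS}  # the agent's learned preference per color
alpha = 0.1                        # learning rate
epsilon = 0.2                      # chance of exploring a random block

def choose_block():
    """Mostly approach the block currently valued highest, but sometimes explore."""
    if random.random() < epsilon:
        return random.choice(COLORS)
    return max(values, key=values.get)

for trial in range(500):
    color = choose_block()
    reward = TRAINER_FEEDBACK[color]                   # "good robot" / "bad robot"
    values[color] += alpha * (reward - values[color])  # simple delta-rule update

print(values)  # after training, "red" ends up with the highest learned value
```

The real animat brain is vastly richer than a lookup table, but the loop captures the principle described above: behavior is shaped by reward history rather than by task-specific programming.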

Whole-brain systems are difficult to engineer and test. The Neuromorphics Lab accelerates both processes by training the animat brain in virtual environments. Not being bound to a physical substrate such as a robot, researchers are able to test thousands of different brains in parallel on high-performance computing resources, such as NSF's TeraGrid, and use the best versions on the robot. The platform the developers selected is the iRobot Create, a robot that looks a lot like the Roomba vacuum-cleaning robot.
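As a rough illustration of that workflow, here is a hedged sketch of evaluating many candidate brains in parallel in a simulator and keeping the best one. None of the functions below are part of Cog or TeraGrid; the brain representation, the simulation and the scoring are placeholder assumptions:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def make_candidate_brain(seed):
    """Stand-in for generating one parameterized candidate brain (hypothetical)."""
    rng = random.Random(seed)
    return {"seed": seed, "params": [rng.uniform(-1.0, 1.0) for _ in range(8)]}

def evaluate_in_simulation(brain):
    """Stand-in for running a candidate brain in a virtual environment and scoring it."""
    rng = random.Random(brain["seed"])
    score = sum(brain["params"]) + rng.gauss(0.0, 0.1)  # toy fitness measure
    return brain["seed"], score

if __name__ == "__main__":
    candidates = [make_candidate_brain(seed) for seed in range(1000)]

    # Evaluate many candidate brains in parallel, mirroring the idea of testing
    # thousands of brains at once on high-performance computing resources.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate_in_simulation, candidates))

    best_seed, best_score = max(results, key=lambda r: r[1])
    print(f"best candidate brain: seed={best_seed}, score={best_score:.3f}")
    # Only the best-performing brain would then be loaded onto the physical robot.
```

The design choice is the one the article describes: because the brains live in software, selection and refinement can happen at simulation speed, and only the winner ever needs to touch hardware.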

Since the animat is not explicitly programmed to solve specific tasks, there is greater flexibility in the robot's prospective functions. Eventually, it will function autonomously and be able to take on more complex adaptive tasks such as intelligently interacting with and caring for the elderly, autonomously exploring and collecting samples on an alien planet, and generally exhibiting more humanoid behavior.

This is a challenge for any artificial intelligence program under development: it is simply impossible to program a lifetime's worth of knowledge into a robot! That is why it is so important for the next generation of artificial intelligence to be able to learn throughout a lifetime without needing constant reprogramming.

Science fiction is rife with examples of learning robots, and HAL 9000 from Kubrick's "2001: A Space Odyssey" will forever come to mind as the media's favorite malfunctioning computer. While confident about the advent of generally intelligent machines in the near future, researchers at the Neuromorphics Lab are optimistic that misbehaving robots like HAL will live only in science fiction movies. Future robots will not be programmed, but trained. The key is to educate them well!
