Robots use their hands to 'think'

October 18, 2010

Action-centred cognition is a groundbreaking concept in robotics where robots learn to 'think' in terms of what actions they can perform on an object. This new trend in cognition theory opens exciting new vistas.

Actions speak louder than words, particularly if you are a robot. At least, that is the theory proposed by a major European effort to develop a wholly new approach to robotic cognition.

The PACO-PLUS project sought to test a groundbreaking theory called ‘object-action complexes’ (OACs, pronounced 'oaks'). OACs are units of 'thinking-by-doing'. Essentially, this approach designs software and hardware that allow the robot to think about objects in terms of the actions that can be performed on them.

For example, consider what a robot sees when it looks at an object. If the object has a handle, the robot can grasp it. If it has an opening, the robot can potentially fit something into it or fill it with liquid. If it has a lid or a door, the robot can potentially open it.
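To make the idea concrete, here is a minimal sketch in Python of how such an object-action pairing might be represented. It is purely illustrative: the feature names, the affordance rules and the ObjectActionComplex class are invented for this example and are not taken from the PACO-PLUS software.

```python
# Illustrative sketch only -- not PACO-PLUS code. An "object-action complex"
# is modelled here as an object described purely by the actions that its
# observed features afford.
from dataclasses import dataclass

# Hypothetical mapping from an observed feature to the actions it affords.
AFFORDANCE_RULES = {
    "handle": ["grasp"],
    "opening": ["insert", "pour_into"],
    "lid": ["open"],
    "door": ["open"],
}

@dataclass
class ObjectActionComplex:
    """An object represented by its features and, through them, its actions."""
    features: set

    def possible_actions(self):
        """Return every action the object's observed features afford."""
        actions = []
        for feature in sorted(self.features):
            actions.extend(AFFORDANCE_RULES.get(feature, []))
        return actions

# A cup with a handle and an opening affords grasping, inserting and pouring.
cup = ObjectActionComplex(features={"handle", "opening"})
print(cup.possible_actions())   # ['grasp', 'insert', 'pour_into']
```

The point of the sketch is the design choice: the object carries no label such as 'cup' at all; it is defined entirely by what can be done to it.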

Thus, objects gain their significance from the range of possible actions a robot can execute upon them. This opens up a much more interesting way for robots to think autonomously, because it fosters the possibility of emergent behaviour: complex behaviour that arises spontaneously as a consequence of quite simple rules.

Absurdly simple complexity

Our universe demonstrates astounding complexity built from a handful of universal constants, and DNA consists of just four bases, yet from these all life emerges. The PACO-PLUS researchers hope to imitate, to some degree, that kind of complexity: the complexity that arises from the absurdly simple.

In some respects, their approach imitates the learning processes of young infants. When they encounter a new object, infants will try to grasp it, eat it, or bang it against something else. As they learn from trial and error that, for example, a round peg will fit into a round hole, their range of actions grows.

Watching other people adds to a child's understanding too, and in time the child starts using actions in combination, such as grasping a door handle and then twisting it, to accomplish a more complex goal.
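As a minimal sketch of that trial-and-error loop (again illustrative only, not the project's actual learning algorithm), the following Python snippet has a 'robot' try actions at random, remember which ones succeed, and end up preferring the ones that reliably work; the action names, the success test and the episode count are all invented for the example.

```python
# Illustrative trial-and-error loop -- not the PACO-PLUS learning algorithm.
# The "infant" robot tries actions at random, remembers which ones succeed,
# and ends up preferring the actions that reliably work.
import random

def learn_by_trial(actions, try_action, episodes=200):
    """Try actions repeatedly and rank them by observed success rate."""
    successes = {a: 0 for a in actions}
    attempts = {a: 0 for a in actions}
    for _ in range(episodes):
        action = random.choice(actions)
        attempts[action] += 1
        if try_action(action):          # True if the action achieved its effect
            successes[action] += 1
    # Prefer the actions that worked most reliably.
    return sorted(actions,
                  key=lambda a: successes[a] / max(attempts[a], 1),
                  reverse=True)

# Hypothetical world: only the round peg reliably fits the round hole.
def try_action(action):
    return action == "put_round_peg_in_round_hole" and random.random() < 0.9

ranking = learn_by_trial(["put_round_peg_in_round_hole",
                          "put_square_peg_in_round_hole",
                          "bang_peg_on_table"], try_action)
print(ranking)   # the round-peg action should come out on top
```

Chaining the actions that survive this kind of filtering, grasping a handle and then twisting it, is what lets more complex goals emerge from simple primitives.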

PACO-PLUS takes advantage of all these proven strategies to enable robots to teach themselves by learning from their observations and their experience. As a key part of that strategy, PACO-PLUS conducted most of its work with humanoid robots: robots shaped like people.

“Humanoid robots are artificial embodiments with complex and rich perceptual and motor capabilities, which make them… the most suitable experimental platform to study cognition and information-processing,” explains Tamim Asfour, leader of the Humanoids Research Group at the Institute for Anthropomatics at Karlsruhe Institute of Technology (KIT), Germany, and co-coordinator of the PACO-PLUS project.

I am therefore I think

"Our work follows on from Rodney Brooks who was the first to explicitly state that cognition is a function of our perceptions and our ability to interact with our environment. In other words, cognition arises from our embodied and situated presence in the environment."

Brooks, who published his most influential work in the 1980s, believed that moving and interacting with the environment were the difficult problems in biological evolution; once a species achieved that, it was relatively easy to ‘evolve’ the high-level symbolic reasoning of abstract thought. He also believed that disembodied intelligence was an impossible problem to solve.

This reverses the approach taken by classical ‘artificial intelligence’. Classical AI holds that if you develop enough abstract intelligence, the machine will then be able to perceive and solve problems; robotic cognition holds that if you develop useful perception and interaction, intelligence will emerge spontaneously.

The jury is still out on who is right, but the robotic cognition school has biology on its side, and now it has the work of the PACO-PLUS project, too.

Progress is being made, but there are no genuine I, Robot candidates on the scene yet. That Hollywood interpretation is still a long way off, but the applications and demonstrators built by PACO-PLUS show that we are now, perhaps, on the right track.

More information: This is the first of a two-part PACO-PLUS feature.

Related Stories

As robots learn to imitate

December 22, 2004

Can robots learn to communicate by studying and imitating humans' gestures? That's what MIRROR's researchers aimed to find out by studying how infants and monkeys learn complex acts such as grasping and transferring it to ...

'What can I, Robot, do with that?'

April 21, 2008

A new approach to robotics and artificial intelligence (AI) could lead to a revolution in the field by shifting the focus from what a thing is to how it can be used.

Piecing together the next generation of cognitive robots

May 5, 2008

Building robots with anything akin to human intelligence remains a far off vision, but European researchers are making progress on piecing together a new generation of machines that are more aware of their environment and ...

Robotic perception, on purpose

October 26, 2009

(PhysOrg.com) -- European researchers developed technology that enables a robot to combine data from both sound and vision to create combined, purposeful perception. In the process, they have taken the field to a new level.

4 comments

DamienS
Oct 18, 2010
I like this approach. Hopefully the underlying software is flexible enough to evolve increasingly complex behaviours. But also, the success of this approach will be highly dependent on the mobility, dexterity and the range of sensory inputs of the hardware, so that the robot can interact with objects and its environment in diverse ways.

I never thought a disembodied intelligence (brain in a box) approach would ever work in developing a true AI.
Quantum_Conundrum
Oct 18, 2010
In order to maximize the chance of success with this approach you need maximum sensory input in all senses and all spectra of sensation.

You also need some sort of safety mechanism, because this approach to A.I. creates a learning engine in a moral vacuum. A humanoid A.I. of this type may accidentally or intentionally hurt a human being because there is no moral relevance, as it would simply view all things as "objects" to be experimented with through interaction.
TechnoCore
Oct 19, 2010
@quantum: I wouldn't start out by giving it sharp knives to play with in front of human test subjects ;)

When/if in a couple of years it has mastered the basics of how the physical world works... it can be trained on concepts like cause and effect on more abstract matters.. like that there will be consequences if you act against socially accepted rules. But I guess that's still far away :)
migmigmig
Nov 06, 2010
Uh, an infant human is a "learning engine in a moral vacuum."

As with humans, morality would need to be programmed in from the outside.
