'Simplified' brain lets the iCub robot learn language (w/ Video)

February 19, 2013, Institut National de la Sante et de la Recherche Medicale

(Phys.org)—The humanoid robot iCub can now learn language, a technological feat made possible by the development of a "simplified artificial brain" that reproduces certain types of so-called "recurrent" connections observed in the human brain.

The artificial brain enables the robot to learn, and subsequently understand, new sentences containing a new grammatical structure. It can link two sentences together and even predict how a sentence will end before it is uttered. The research has been published in a scientific journal.

Researchers from INSERM, CNRS and Université Lyon 1 have succeeded in developing an "artificial brain" constructed on the basis of a fundamental principle of the workings of the human brain, namely its ability to learn a new language. The model was developed after years of research in INSERM Unit 846 at the Institut de recherche sur les cellules souches et cerveau, through studying the structure of the human brain and the mechanisms it uses for learning.

One of the most remarkable aspects of language processing is the speed at which it is performed. For example, the human brain processes the first words of a sentence in real time and anticipates what follows, thus improving the speed with which humans process information. Still in real time, the brain continually revises its predictions through interaction between new information and the previously created context. The region inside the brain linking the cortex and the striatum plays a crucial role in this process.
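The incremental, word-by-word prediction described above can be illustrated with a deliberately simple sketch (this is not the researchers' model, just an illustration): a bigram table built from a tiny made-up corpus, queried after each incoming word so the "expected next word" is revised as the sentence unfolds. The corpus and vocabulary are invented for the example.

```python
from collections import Counter, defaultdict

# Bigram counts from a tiny, made-up corpus (illustrative only).
corpus = ("the robot moves the violin to the left . "
          "the robot points to the guitar .").split()
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def predict(word):
    """Most frequent next word given the current one, or None if unseen."""
    nxt = bigrams[word]
    return nxt.most_common(1)[0][0] if nxt else None

# Process a sentence word by word, revising the prediction at each step.
for w in ["the", "robot", "points", "to"]:
    print(f"heard {w!r:9} -> expecting {predict(w)!r}")
```

A real model conditions on the whole sentence so far rather than just the last word, which is exactly what the recurrent network described next provides.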

Based on this research, Peter Ford Dominey and his team have developed an "artificial brain" that uses a "neuronal construction" similar to that of the human brain. Thanks to this so-called recurrent construction (with connections that create locally recurring loops), the artificial brain system can understand new sentences having a new grammatical structure. It is capable of linking two sentences and can even predict the end of a sentence before it is provided.
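A minimal sketch of such a recurrent network is shown below, in the spirit of reservoir computing (the family of models this team works with), though the vocabulary, sizes, and sentences here are invented for illustration. Random input and recurrent weights form the "locally recurring loops"; only a linear readout is trained, here to predict the next word at every time step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; each word enters the network as a one-hot vector.
vocab = ["the", "robot", "points", "to", "guitar", "violin"]
idx = {w: i for i, w in enumerate(vocab)}
n_in, n_res = len(vocab), 50

# Fixed random input and recurrent ("reservoir") weights; only the
# readout is trained. Rescaling keeps the recurrent dynamics stable.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_rec = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))  # spectral radius < 1

def run(words):
    """Feed a word sequence through the recurrent loop, one word at a time."""
    x = np.zeros(n_res)
    states = []
    for w in words:
        u = np.zeros(n_in)
        u[idx[w]] = 1.0
        x = np.tanh(W_in @ u + W_rec @ x)  # the locally recurring loop
        states.append(x.copy())
    return np.array(states)

# Train a linear readout to predict the NEXT word at every time step.
sentences = [["the", "robot", "points", "to", "the", "guitar"],
             ["the", "robot", "points", "to", "the", "violin"]]
X, Y = [], []
for s in sentences:
    st = run(s)
    for t in range(len(s) - 1):
        X.append(st[t])
        y = np.zeros(n_in)
        y[idx[s[t + 1]]] = 1.0
        Y.append(y)
W_out = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)[0]

def predict_next(words):
    """Predict the most likely next word given the sentence so far."""
    return vocab[int(np.argmax(run(words)[-1] @ W_out))]

print(predict_next(["the", "robot", "points"]))  # -> "to"
```

Because the recurrent state depends on the whole word sequence, the same readout can anticipate a sentence's continuation mid-utterance, which is the behaviour the article describes.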

To put this advance into a real-life situation, the INSERM researchers incorporated this new brain into the iCub humanoid robot.

In a video demonstration, a researcher asks the iCub robot to point to a guitar (represented by a blue object) and then asks it to move a violin (represented by a red object) to the left. Before performing each task, the robot repeats the sentence and explains that it has fully understood what it has been asked to do.

For the researchers, the contribution this makes to research into certain diseases is of major importance. The system can be used to better understand the way in which the brain processes language. "We know that when an unexpected word occurs in a sentence, the brain reacts in a particular way. These reactions can be recorded by sensors placed on the scalp", explains Peter Ford Dominey. The model developed by Dr Xavier Hinaut and Dr Peter Ford Dominey makes it possible to identify the source of these responses in the brain. If this model, based on the organisation of the cerebral cortex, is accurate, it could contribute to understanding possible linguistic malfunctions in Parkinson's disease.

This research has another important implication, that of contributing to the ability of robots to learn a language one day. "At present, engineers are simply unable to program all of the knowledge required in a robot. We now know that the way in which robots acquire their knowledge of the world could be partially achieved through a learning process – in the same way as children", explains Peter Ford Dominey.




Comments

1 / 5 (1) Feb 19, 2013
So to make a computer more human, first it has to be simplified.
That makes sense.
Evolution would be parsimonious with its resources.
5 / 5 (1) Feb 19, 2013
I had trouble understanding what he wanted.
3 / 5 (2) Feb 20, 2013
I had trouble understanding what he wanted.
It's just a variation of grammar. Quite a few European languages have this to a degree, and many can accommodate both, i.e.:

"This lamp post is blue" is equivalent to:-

"It is blue, this lamp post"

The sentence the guy spoke to the robot didn't have a comma. Humans recognise this easily, with a background of having heard other languages, whereas the robot is programmed to listen for the slight pause (if any) where the comma would have been had the sentence been typed.
5 / 5 (1) Feb 20, 2013
This is the clearest case of real, reasonably free-form verbal communication between a machine and a person that I have seen.


In 20 years this will be commonplace: faster, more comprehensive in terms of knowledge and vocabulary, and more fluid, to the point of approaching human-to-human communication.

The interesting thing is that once you train a machine, its knowledge is available to all equivalently structured machines.

Train one, and all can know.

Still, there is the problem of sensory modeling that will be required if machines are to grok the world around them rather than simply churn through a database of factoids.

This will take significantly longer.
2 / 5 (1) Feb 20, 2013
@vendicarE: Within ten years it will be common. The more breakthroughs we achieve, the more breakthroughs we can achieve (unless somebody holds the technology to themselves for whatever reason).
1 / 5 (1) Feb 20, 2013
Another test: make it aware of 20 sentences in each of 10 languages, then ask it WHAT LANGUAGE IT HAS HEARD!
1 / 5 (1) Feb 21, 2013
Failed; both actions occurred to the same object when two objects were specified in the instructions. But it still demonstrates an advance in speech recognition and AI.
