Robot NICO learning self awareness using mirrors

Aug 24, 2012 by Bob Yirka
Credit: Yale University

(Phys.org)—Self awareness is one of the hallmarks of intelligence. We as human beings clearly understand that we are both our bodies and our minds, and that others perceive us differently than we perceive ourselves. Perhaps nowhere is this more evident than when we look in a mirror.

In so doing we understand that the other person looking back is really the three-dimensional embodiment of who we really are as a complete person. For this reason, researchers use something called the mirror test as a means of discerning other animals' level of self-awareness. They put a mark of some sort on the face without the animal knowing it, then allow the animal to look in a mirror; if the animal is able to comprehend that the mark is on its own face, and demonstrates as much by touching itself where it's been marked, then the animal is deemed to have self awareness. Thus far, very few animals have passed the test: some apes, dolphins and elephants. Now, researchers at Yale University are trying to program a robot that is able to pass the test as well.

The robot's name is NICO, and it has been developed by Brian Scassellati and Justin Hart, who together have already taught the robot to recognize where its arm is in three-dimensional space to a very fine degree, a feat never before achieved with a robot of any kind. The next step is to do the same with other body parts: the feet, legs, torso and, of course, eventually the head, which is the most critical part in giving a robot self awareness, the ultimate goal of the project.
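
The article doesn't describe the researchers' actual method, but the idea of a robot "knowing where its arm is" from its own joint state can be illustrated with a toy forward-kinematics sketch. Everything here (a planar two-link arm, the link lengths) is invented for illustration:

```python
import math

def arm_endpoint(theta1, theta2, l1=0.3, l2=0.25):
    """Planar two-link forward kinematics: where the robot 'believes'
    its hand is, computed purely from its own joint angles.
    Link lengths (in meters) are made up for this sketch."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# With both joints at zero the arm lies stretched along the x-axis,
# so the hand sits at roughly (0.55, 0.0).
hand = arm_endpoint(0.0, 0.0)
```

A real system would go further by calibrating such a self-model against camera observations of the arm; the sketch only shows the internal-model side of that loop.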

Programming a robot to have self awareness is considered to be one of the key milestones to creating robots that are truly useful in everyday life. A robot that "lives" in a person's home, for example, would have to have a very good understanding of where every part of itself is and what it's doing in order to prevent causing accidental harm to housemates. This is so because the movements of people are random and haphazard, so much so that people quite often accidentally bump into one another. With robots, because they are likely to be stronger, such accidents would be unacceptable.

Scassellati and Hart believe they are getting close and expect NICO to be able to pass the mirror test within the next couple of months. No doubt others will be watching very closely, because if they meet with success it will be a truly historic moment.


User comments: 41

Noumenon
3.5 / 5 (74) Aug 24, 2012
If it passes the test, it would mean that it was programmed to pass the test, NOT that it was "self aware" in any sense of being conscious of itself,..... which seems to be the attempted meaning above. In comparison, it would seem rather trivial to program it to identify an image through its cameras and use that data as instructions to perform pre-programmed tasks, like removing an object placed on its head.
SoylentGrin
3.6 / 5 (5) Aug 24, 2012
it would seem rather trivial to program it to identify an image through its cameras


This is hardly trivial. Pattern and object recognition is quite a challenge in programming, especially when you want the robot to perform in environments that weren't available to the programmers.
Additionally, getting the robot to make the link between "That's Object X" and "That Object is me, and under my control, and not something that will move on its own without me doing it" is fairly sophisticated.
gerrit_harteveld
not rated yet Aug 24, 2012
Does anybody know what the abbreviation NICO stands for?
Noumenon
3.5 / 5 (71) Aug 24, 2012
This is hardly trivial.

I said "In comparison, [i.e. to programming self-awareness], it would seem rather trivial"

Pattern and object recognition is quite a challenge in programming, especially when you want the robot to perform in environments that weren't available to the programmers.


It's not a generalized object recognition problem; it only has to identify a known image with known visual cues.

getting the robot to make the link betweeen "That's Object X" and "That Object is me, and under my control, and not something that will move on its own without me doing it" is fairly sophisticated.


There's no "me" recognition there. There's no "me" awareness in that robot at all. Therefore wrt AI, it is not sophisticated at all.

It merely identifies known visual cues as a source of data to operate on, then moves and tracks to verify the image can be used as such feedback data. That's it.

It has zero understanding that the image is itself.

Noumenon
3.5 / 5 (73) Aug 24, 2012
As applied to a machine the above mirror test, like the Turing test, merely gauges its ability to give the 'Appearance' of self-awareness or intelligence. Such is the rudimentary state that AI is in, that it relies on such a standard in tricking one to believe it possesses self-awareness.

Of course if humans understood consciousness, then it could in principle be modeled,... but we don't,.. at all, thus, the above phrasing is fraudulent, or the definitions (i.e. self-awareness) are reduced to uselessness, in an effort to lower the bar for achievement's sake.
SoylentGrin
3.6 / 5 (7) Aug 24, 2012
There's no "me" recognition there. There's no "me" awareness in that robot at all. Therefore wrt AI, it is not sophisticated at all.


When the robot sees objects in its visual field, it has to follow rules on how to interact with those objects. It would also have to follow different rules on what to do if the object it's looking at is itself.
For instance, if the rule is, "When an obstacle is in your path, wait patiently for it to be removed." it would have to follow a different rule if the object is its own arm.
Classifying objects is hard enough. Knowing the difference between external objects and oneself in order to act differently is the essence of self-awareness.
Nobody said this is going to be up for Turing tests, or is the ultimate end-product of AI research. It's an incremental step in that direction, and is pretty cool in and of itself. Being able to discern what is oneself and what's other than oneself IS self-awareness. You're thinking "consciousness".
Tektrix
5 / 5 (1) Aug 24, 2012
The real test is to see how it responds to being asked what it feels like to be self-aware.
Noumenon
3.4 / 5 (72) Aug 24, 2012
Knowing the difference between external objects and oneself in order to act differently is the essence of self-awareness.


That is true, but it is patently false that the above robot "knows" anything of the kind.

To the robot, the data in question (visual images of the robot) is no different than any other visual data with respect to how the software operates at a fundamental level,... which identifies a known object, then say moves its arm to validate that the image is the source of data which will be used to execute the "it's me" code.

The only difference is in the mind of the person being fooled into thinking the robot is self-aware.

And yes, I'm thinking of consciousness, because self-awareness presumes consciousness by definition.

No one understands how self-awareness works, thus, no one can properly model it. Because of this fact, the standards of A.I. have been lowered to the extent of merely fooling one into believing it based on appearance, not actual fact.
Noumenon
3.4 / 5 (73) Aug 24, 2012
....If such are the standards for claiming a.i. self-awareness, or for using such terminology, then simple servo systems, which have been around for decades, can be said to be 'self-aware' as they respond based on their own current state.

The above robot is not doing anything more sophisticated, just with more complex input data.
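
The servo analogy can be made concrete. A minimal proportional feedback loop "responds based on its own current state" with no self-model at all; the gain and setpoint below are arbitrary illustration values:

```python
def servo_step(position, setpoint, gain=0.5):
    """One tick of a proportional controller: the servo reacts to its
    own current state as plain feedback data, with no model of 'self'."""
    error = setpoint - position
    return position + gain * error

# Iterating the loop drives the position toward the setpoint.
pos = 0.0
for _ in range(20):
    pos = servo_step(pos, setpoint=10.0)
```

After twenty iterations the position has converged to within a fraction of a percent of the setpoint, which is the entirety of the system's "awareness" of itself.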
antialias_physorg
4.3 / 5 (7) Aug 24, 2012
If it passes the test, it would mean that it was programmed to pass the test, NOT that it was "self aware" in any sense of being conscious of itself,

If you pass the test, it would mean that you were programmed (by your DNA) to pass the test, NOT that you are "self aware" in any sense of being conscious of yourself.

They are not hard coding a 'self recognition' module. If it really realizes that its mirror image is itself then that will be awesome.
SoylentGrin
5 / 5 (1) Aug 24, 2012
They are not hard coding a 'self recognition' module. If it really realizes that its mirror image is itself then that will be awesome.

That's true, AP. If this is accomplished with neural nets, for instance, which "behave" rather than following a program, and it behaves differently when it recognizes itself rather than an external object, that would be something.
baudrunner
1 / 5 (2) Aug 24, 2012
There's probably a pretty thin line between DNA code and computer code, at least in principle. Therefore, the most difficult aspect of the program that would be required is for the software to decide, "that is me", based on determining that an image making duplicate movements is seen on a reflecting surface versus a monitor showing a real-time video image. Getting NICO to determine that "that is me" on a recording of itself making movements not duplicating current motion, or recognizing itself in a still image, is necessary for full self-awareness otherwise the job is only partly done. Notice I didn't put quotes around the words self-awareness this time because that is really all that our DNA sub-routines are doing because we are essentially biological entities doing the same thing as the robot.
extinct
4.5 / 5 (4) Aug 24, 2012
"Robot NICO learning self awareness using mirrors"
??? I rather doubt it! chances are that whoever wrote that headline has *really* low standards as to what constitutes self-awareness and what doesn't. the first thing you will need is 100,000,000,000 neurons, with as many as 50,000 connections per neuron to other neurons. then you'll need warm & wet quantum entanglement. then... well, then we'll talk.
extinct
not rated yet Aug 24, 2012
Does anybody know what the abbreviation NICO stands for?

Yale's website doesn't appear to suggest any acronym. whereas this article here calls the robot "NICO", Yale themselves call it "Nico"
Thrasymachus
2.8 / 5 (5) Aug 24, 2012
If the "appearance" of self-awareness can be modeled and reconstructed from that model, what's the practical difference between that constructed appearance of self-awareness and the appearance of self-awareness in a natural organism? That's a rhetorical question, by the way.

If you're denying the possibility of modeling the appearance of self-awareness for Kantian reasons, Kant would disagree with you. The appearance of anything can in principle be modeled and reconstructed in Kantian epistemology, indeed, the appearance itself is a model and reconstruction, which is why it lends itself to further model-building.

If you're arguing that self awareness as some kind of ding-an-sich can never be modeled, then you're not saying anything important or relevant, because nobody cares about the ding-an-sich. What matters are the appearances and the models of those appearances that allow us to predict and control future appearances.
Monshat
1.8 / 5 (5) Aug 24, 2012
If something becomes self-aware, then it immediately achieves a human equivalency and should be treated as such--whether a human, an animal or a collection of circuitry in a metal case. It occurs to me that this might be the ethical and logical point at which a human foetus becomes a person. Perhaps abortion laws should be refashioned so that civilized society stays on the safe side of that point.
Eikka
4 / 5 (4) Aug 24, 2012
what's the practical difference between that constructed appearance of self-awareness and the appearance of self-awareness in a natural organism? That's a rhetorical question, by the way.


It shouldn't be, because we don't know what the actual difference between awareness and its appearance is, so we can't infer that the appearance is the same thing.

After all, appearances hold only to the extent that we can observe them. That's why stage magic works.
Noumenon
3.5 / 5 (69) Aug 24, 2012
@Thrasymachus, AA,

I never mentioned Kant. I never suggested that self-awareness could not be modeled in principle. In fact I stated above,... "Of course if humans understood consciousness [how it worked], then it could in principle be modeled".

If the "appearance" of self-awareness can be modeled and reconstructed from that model, what's the practical difference between that constructed appearance of self-awareness and the appearance of self-awareness in a natural organism?


If the practical purpose is to fool elementary children and the computer illiterate, then there is no substantive difference.
Noumenon
3.5 / 5 (69) Aug 24, 2012
,... however, if one assumes the above researchers are more serious than that,.. then an appearance of self-awareness does not imply the thing is actually self-aware.

Following Descartes, We know that self-awareness is not merely a matter of appearances, as we experience it as a conscious reality, Cogito ergo sum. That's the difference.

The robot faking it, does not equate to conscious self-awareness, as a molecule modeled in a computer does not equate to being a real molecule.

I'm objecting to the terminology used here, and the idea that a.i. programmers can in principle cause consciousness and self-awareness to magically occur without first understanding how it occurs in nature.
kochevnik
3 / 5 (2) Aug 24, 2012
then you'll need warm & wet quantum entanglement. then... well, then we'll talk.
And quantum entrainment
Torbjorn_Larsson_OM
1 / 5 (1) Aug 24, 2012
Interesting that failures of human exceptionality get people's hackles up. And hey - exciting that a robot soon will have an "own" Turing test!

@ Noumenon:

"If it passes the test, it would mean that it was programmed to pass the test, NOT that it was "self aware"".

It would be self aware in the same sense that apes including us, elephants and dolphins are self aware, by having the explicit trait. This is the whole point with Turing tests, because there is no other measurable difference.

Also, much of what these robots do is embodied and outside their "program", in the same sense that our own self awareness is not genetically described but follows after development and learning, most of which is external information.

For example, the limbs' movement characteristics, the visual perception and really the whole mirror setup are not a programmed part of the experiment.
Torbjorn_Larsson_OM
2 / 5 (1) Aug 24, 2012
[cont]

"We know that self-awareness is not merely a matter of appearances, as we experience it as a conscious reality, Cogito ergo sum."

None of that is part of the testable definition, see the article. This is making shit up. You, and Kant, are Not Even Wrong.

Btw, "consciousness" is notorious for lacking definition. If we mean awareness vs sleep, or emotional behavior, nematodes have a sleep analog and tiring equivalents. So they are by that definition "conscious". The model organism nematode has less than 1000 cells, and not all neurons, so that is certainly within today's robots' abilities. (Say 100 neurons, which can have ~ 100^2 or 10,000 connections.)

@ baudrunner:

"There's probably a pretty thin line between DNA code and computer code,".

Two problems, our behavior isn't controlled by genes as much as our abilities are (most of development means genes taking advantage of chemistry self-organization) and DNA "code" is technically the triple code for transcribing amino acids.
kochevnik
1 / 5 (3) Aug 24, 2012
Sentience cannot occur merely in a simple feedback loop. The chaotic feedback must provide a fractal scaffolding onto which perception and thought are nucleated. All living things define a self. The self is about opening up to reward and growth and withdrawing from pain and antibiotic environments. The human self reflects the aggregate desires of a trillion cells and their composite demands for extroversion/introversion. Such a complex and interwoven self is not a simple feedback loop.
Noumenon
3.5 / 5 (68) Aug 25, 2012
"We know that self-awareness is not merely a matter of appearances, as we experience it as a conscious reality, Cogito ergo sum."


None of that is part of the testable definition, see the article. This is making shit up. You, and Kant, are Not Even Wrong.


I never referenced Kant nor mentioned him,... are you making shit up?

The above mirror test does not test for self-awareness as that term is generally taken to imply,... so use of this phrase is fraudulent. If the term 'self-awareness' is reduced to meaning merely what a 4 year old would think of a puppet, then it is reduced to mislead. Use a different term for it, like mechanical feedback response, or something.

The robot does not 'know' or 'understand' that the mirror image it records is 'itself',... it merely responds to that image as it would to any other image, as it's programmed to do,... as data to react to.
Noumenon
3.6 / 5 (69) Aug 25, 2012
This is the whole point with Turing tests, because there is no other measurable difference.

Exactly, there is no measurable difference, because there is no understanding of how self-awareness and consciousness manifests in nature. The Turing test and Mirror test at its core, are designed to FOOL one into believing that consciousness exists in the subject. This is entirely different than detecting it as existing. The differences, to detect rather than fool, requires an understanding of how it actually occurs and works in nature.

Btw, "consciousness" is notorious for lacking definition.


Of course it lacks definition if it is not understood how it works or comes about.
Deathclock
2.7 / 5 (6) Aug 25, 2012
Maybe I'm wrong but I don't think it's as simple as programming the robot to recognize an image and identify that image as "self"... I have a feeling they write the code such that the robot can identify objects and, by observing the equal and opposite movement of its mirror image, it is able to determine, outside of its explicit programming, that it controls the image in the mirror directly by controlling itself...

There is no real difference between that and what we do, if you think about it. We look down and see our hands and we recognize them as ours BECAUSE we have and do observe that we can directly affect their position in space. There are various mental illnesses where people reject their own body parts and even go so far as to attempt to cut them off of themselves... I'm not exactly sure what that tells you about the ability to recognize "self" but I am sure it is significant in some way...
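
The mechanism described above, labelling a track "self" when it correlates with one's own motor commands, can be sketched roughly. The threshold, noise level and 1-D motion traces below are invented for illustration; a real robot would be working with image features, not clean signals:

```python
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length motion traces."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def classify(commands, observed, threshold=0.9):
    """Call an observed motion track 'self' when it tracks the robot's
    own motor commands closely enough; 'other' otherwise."""
    return "self" if correlation(commands, observed) > threshold else "other"

random.seed(0)
commands = [random.uniform(-1, 1) for _ in range(100)]  # what the robot did
mirror = [c + random.gauss(0, 0.05) for c in commands]  # moves when we move
stranger = [random.uniform(-1, 1) for _ in range(100)]  # moves on its own
```

Here `classify(commands, mirror)` comes out "self" because the mirror trace follows the motor commands almost exactly, while the independently moving trace comes out "other".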
baudrunner
1 / 5 (3) Aug 25, 2012
@ baudrunner:
"There's probably a pretty thin line between DNA code and computer code,".

Two problems, our behavior isn't controlled by genes as much as our abilities are (most of development means genes taking advantage of chemistry self-organization) and DNA "code" is technically the triple code for transcribing amino acids.
..which is how it works for us. We are assemblages of amino acid proteins which manifest the sympathetic and para-sympathetic autonomous nervous system responses as programmed into those genes. Biological or electronic, our responses are determined by the reactions occurring out of that programming. We are each of us, man and animal, merely subroutines belonging to a universal program in one of the switch-case dimensions of the Great Reality Application (GRA).

Oh, and never quote another out of context. I said, "..in principle."
qitana
not rated yet Aug 25, 2012
Takeno's self-awareness research
Self-awareness in robots is being investigated by Junichi Takeno [2] at Meiji University in Japan. Takeno is asserting that he has developed a robot capable of discriminating between a self-image in a mirror and any other having an identical image to it [3][4], and this claim has already been reviewed (Takeno, Inaba & Suzuki 2005).

(from wikipedia, article: Artificial Consciousness)

Honestly, I'm quite worried about all this, humanity has not a clue how the future will turn out, do we want artificial 'persons' ?
Eikka
3.7 / 5 (3) Aug 25, 2012
There is no real difference between that and what we do, if you think about it. We look down and see our hands and we recognize them as ours BECAUSE we have and do observe that we can directly affect their position in space.


Indeed. It's a problem of categorization and assigning meaning to the categories. "My hand" means something different to me than "Your hand". It's as simple as that.

However, the issue here is that the purely computational machine cannot assign meanings to the categories because it operates on pure symbols that lack context. The categories "me" and "you" are merely labels that point to sets of conditioned/programmed behaviour.

Any context there is will be between the program and its programmer and trainer, so that if the robot behaves differently when looking at its own hand, it is meaningful only to the observer who interprets the action.

The robot doesn't understand its own categories, so it cannot be said to be self-aware.
Eikka
3.3 / 5 (3) Aug 25, 2012
Or if you want to make it even simpler: how can something that lacks a mind have a self?

Claiming that the robot is self-aware is begging the question, because it covertly asserts that the program is a mind.

That's why AI researchers are being criticized as being mere magicians, some of whom even believe in their own tricks.
Deathclock
2.5 / 5 (6) Aug 25, 2012
Or if you want to make it even simpler: how can something that lacks a mind have a self?

Claiming that the robot is self-aware is begging the question, because it covertly asserts that the program is a mind.

That's why AI researchers are being criticized as being mere magicians, some of whom even believe in their own tricks.


At present I agree with you, but if you're asserting that we will never be able to develop sentient machines then I disagree... there is no "magic" that occurs in our brain that cannot be replicated in other machinery, more than likely already possible using our existing technology. It's just a matter of complexity, and creating a machine that can truly learn from experience and make meaningful associations.
Telekinetic
3 / 5 (4) Aug 25, 2012
"Look at NICO. NICO handsome. NICO strong. NICO like lady. NICO more handsome- more strong than man. Lady like NICO. NICO no like man. Terminate man. Terminate. Terminate."
kochevnik
2.7 / 5 (3) Aug 26, 2012
Biological and behavioral complexity emerging from the vortex: https://www.youtu...40B9m4tI
clark_lipkovitz
5 / 5 (1) Aug 27, 2012
If it passes the test, it would mean that it was programmed to pass the test, NOT that it was "self aware" in any sense of being conscious of itself,..... which seems to be the attempted meaning above. In comparison, it would seem rather trivial to program it to identify an image through its cameras and use that data as instructions to perform pre-programmed tasks, like removing an object placed on its head.


Do you really think some of the great researchers at Yale are taking snapshots, or pre-programming a robot to point to coordinates in a mirror? Because this article clearly states learning/understanding.
Assuming a Yale researcher is wasting their time with what you have stated is ridiculous... just about any senior programmer could do the work described in your interpretation of this article (other posts).
SatanLover
1 / 5 (1) Aug 27, 2012
true learning is a very complex and interesting topic in computer science... an interesting starting point would be a couple of robots with ears and a "voice" synthesizer that start with static and, as time goes on, each robot creates their own patterns and strings of 0s and 1s and positive/negative feedback pulses. See if they start developing some kind of language.
Eikka
1 / 5 (1) Aug 27, 2012
At present I agree with you, but if you're asserting that we will never be able to develop sentient machines then I disagree... there is no "magic" that occurs in our brain that cannot be replicated in other machinery


Well, it's unlikely, but you don't know there isn't until you know what is actually happening in the brain. After all, there are many things in nature that we can find analogs for, but can't replicate exactly.

Like electricity can be used to model the flow of water in an analog computer, but you can't wash your hands in a stream of electrons.
Deathclock
3 / 5 (2) Aug 28, 2012
Well, it's unlikely, but you don't know there isn't...


I don't know that there isn't magic?

Like electricity can be used to model the flow of water in an analog computer, but you can't wash your hands in a stream of electrons.


This doesn't make any sense... I'm not even sure what you're talking about.

Our brain takes input from each of our five senses... this is the only input it receives from the outside world. It then operates on that input (information) in various ways, including forming memories (which are really just associations between pieces of information) and also triggering output in the form of impulses to muscles to cause movement... That's really all it does, and while it is complicated in practice it is simple in theory. Any manipulation of data that your brain can do, a standard Turing-complete machine (such as any standard computer) should be able to do as well.
Deathclock
1 / 5 (1) Aug 28, 2012
Really, if you understand computability theory, you should be able to develop the rule-set to model the interactions inside anyone's brain with rocks on a beach... given a big enough beach and enough rocks and eons of time that is.

Binary can encode ANYTHING... any information that can possibly exist, and boolean logic can describe ANY interaction between any two pieces of information. You can use anything to represent binary, hence the rocks on a beach example that I am partial to.

You can model a computer that does anything with anything... letting the presence of that thing be a logical 1 and the absence of that thing be a logical 0... it just won't operate on its own, but the circuits can be traversed manually by a human and you will arrive at the same result as you would have if it was implemented in silicon and electrical circuitry, it would just take far longer.
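
The rocks-on-a-beach point can be sketched: any physical stand-in for 1/0 plus a single universal gate is enough to build arithmetic. Below, Python booleans stand in for pebbles, and everything is built from NAND alone (a ripple-carry adder, chosen here as an illustration, not anything from the thread):

```python
def nand(a, b):
    """The one primitive: True/False stand in for pebble present/absent."""
    return not (a and b)

def xor(a, b):
    """XOR built only from NAND (the standard four-gate construction)."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    """Add three bits; returns (sum_bit, carry_out), all via NAND."""
    s1 = xor(a, b)
    return xor(s1, carry_in), nand(nand(a, b), nand(s1, carry_in))

def add_bits(xs, ys):
    """Ripple-carry addition over little-endian lists of booleans."""
    carry, out = False, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out
```

For example, `add_bits([True, False, True], [True, True, False])` adds the little-endian encodings of 5 and 3 and yields the bits of 8; each call could equally be "executed" by arranging and removing pebbles by hand.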
Eikka
1 / 5 (1) Aug 28, 2012
Any manipulation of data that your brain can do so should a standard turing-complete machine


Unless something you do turns out to be non-computational, in which case it can't.

you should be able to develop the rule-set to model the interactions inside anyone's brain with rocks on a beach


Yes, you can if you use the rocks to simulate a brain from the quantum physics up, which you can't because representing the continuous states with discrete pebbles would take an infinite number of them. You can arrive at an approximation, which won't really be the real thing but close enough that you can make some predictions.

That's what I meant with "you can't wash your hands in a stream of electrons". It will replicate some aspects, but not all of the things that you're trying to copy.

Binary can encode ANYTHING... any information that can possibly exist


Trivial counterpoint: pi. There can't be a complete binary representation, unless you count the algorithm as such
jibbles
not rated yet Aug 28, 2012
what's the practical difference between that constructed appearance of self-awareness and the appearance of self-awareness in a natural organism? That's a rhetorical question, by the way.

It shouldn't be, because we don't know what the actual difference between awareness and its appearance is, so we can't infer that the appearance is the same thing.

After all, appearances hold only to the extent that we can observe them. That's why stage magic works.


actually, according to daniel dennett, consciousness *is very much indeed* like stage magic. here's one of his talks on it:

http://videos.esc...67DKB8DX
jibbles
not rated yet Aug 28, 2012
i for one welcome the self awareness of nico