Self-awareness not unique to mankind

June 15, 2015

Humans are unlikely to be the only animal capable of self-awareness, a new study has shown.

Conducted by University of Warwick researchers, the study found that humans and other animals capable of mentally simulating environments require at least a primitive sense of self. The finding suggests that any animal that can simulate environments must have a form of self-awareness.

Self-awareness is often viewed as one of man's defining characteristics, but the study strongly suggests that it is not unique to mankind and is instead likely to be common among animals.

The researchers, from the University of Warwick's Departments of Psychology and Philosophy, used thought experiments to discover which capabilities animals must have in order to mentally simulate their environment.

Commenting on the research, Professor Thomas Hills, study co-author from Warwick's Department of Psychology, said:

"The study's key insight is that those animals capable of simulating their future actions must be able to distinguish between their imagined actions and those that are actually experienced".

The researchers were inspired by work conducted in the 1950s on maze navigation in rats. It was observed that rats, at points in the maze that required them to make decisions on what they would do next, often stopped and appeared to deliberate over their future actions.

Recent neuroscience research found that at these 'choice points' rats and other vertebrates activate regions of their hippocampus that appear to simulate choices and their potential outcomes.

Professor Hills and Professor Stephen Butterfill, from Warwick's Department of Philosophy, created different descriptive models to explain the process behind the rat's deliberation at the 'choice points'.

One model, the Naive Model, assumed that animals inhibit action during simulation. However, this model created false memories because the animal would be unable to tell the difference between real and imagined actions.

A second, the Self-actuating Model, was able to solve this problem by 'tagging' real versus imagined experience. Hills and Butterfill called this tagging the 'primal self.'
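The contrast between the two models can be sketched in a few lines of code. This is a hypothetical illustration, not the authors' implementation; the class names, actions, and outcomes are invented. Without a tag, simulated outcomes are stored indistinguishably from lived ones; with the 'primal self' tag, the agent can separate the knower from the known.

```python
# Hypothetical sketch of the two models described above (not the authors' code).
# The Naive Model stores simulated outcomes indistinguishably from real ones;
# the Self-actuating Model tags each experience as real or imagined.

class NaiveModel:
    def __init__(self):
        self.memory = []  # real and imagined experiences are mixed together

    def simulate(self, action, outcome):
        self.memory.append((action, outcome))  # a false memory: no tag

    def act(self, action, outcome):
        self.memory.append((action, outcome))

    def real_experiences(self):
        # Impossible to recover: real and imagined entries look identical.
        return self.memory


class SelfActuatingModel:
    def __init__(self):
        self.memory = []

    def simulate(self, action, outcome):
        self.memory.append((action, outcome, "imagined"))  # tagged as simulation

    def act(self, action, outcome):
        self.memory.append((action, outcome, "real"))  # tagged as lived experience

    def real_experiences(self):
        # The 'primal self' tag lets the agent filter out imagined episodes.
        return [(a, o) for a, o, tag in self.memory if tag == "real"]


naive = NaiveModel()
naive.simulate("turn left", "food")
naive.act("turn right", "no food")
print(len(naive.real_experiences()))  # 2 (includes the false memory)

tagged = SelfActuatingModel()
tagged.simulate("turn left", "food")
tagged.act("turn right", "no food")
print(tagged.real_experiences())  # [('turn right', 'no food')]
```

The tag itself is trivial; the paper's point is that some such self/other distinction is logically required for simulation not to corrupt memory.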

Commenting on the finding, Professor Hills said:

"The study answers a very old question: do animals have a sense of self? Our first aim was to understand the recent neural evidence that animals can project themselves into the future. What we wound up understanding is that, in order to do so, they must have a primal self."

"As such, humans must not be the only animal capable of self-awareness. Indeed, the answer we are led to is that anything, even robots, that can adaptively imagine themselves doing what they have not yet done, must be able to separate the knower from the known."

The study, From foraging to autonoetic consciousness: The primal self as a consequence of embodied prospective foraging, is published by Current Zoology.


More information: "From foraging to autonoetic consciousness: The primal self as a consequence of embodied prospective foraging", Current Zoology 61(2): 368–381, 2015



Eikka
5 / 5 (1) Jun 15, 2015
Indeed, the answer we are led to is that anything, even robots, that can adaptively imagine themselves doing what they have not yet done, must be able to separate the knower from the known."


The idea seems to be that the animal/robot stops and has a short dream where it simulates future actions as if they were real, which should then create false memories if the animal weren't self-aware.

But what if that's the case?

In control theory it's common to run a parallel simulation of the system to predict the future and adjust the feedback loops accordingly. This simulation runs continuously predicting possible futures, and simply puts a "pressure" on the actual control system. At every run, it simply forgets the previous prediction and makes a new one.

In other words, if experience is memory, then it always knows just one experience - the future that it's predicting as if it had already happened, which is changing continuously depending on what does happen.
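The receding-horizon pattern described in this comment can be sketched concretely. This is a hypothetical toy example with an invented linear plant and gains chosen for illustration; it shows a prediction being computed, used to bias the controller, and then discarded rather than stored as memory.

```python
# Hypothetical sketch of the control-theory pattern described above:
# at each step the controller simulates a short future from the current state,
# uses it to put "pressure" on the control output, then forgets the prediction.

def simulate_future(state, control, steps=5):
    """Roll a toy plant model forward; the result is never stored as history."""
    predicted = state
    for _ in range(steps):
        predicted = 0.9 * predicted + control  # toy linear plant model
    return predicted

def control_loop(state, target, iterations=50):
    control = 0.0
    for _ in range(iterations):
        predicted = simulate_future(state, control)  # transient prediction
        control += 0.05 * (target - predicted)       # pressure on the loop
        state = 0.9 * state + control                # the real plant evolves
        # the previous prediction is simply overwritten on the next pass
    return state

print(control_loop(0.0, 10.0))
```

At equilibrium the controller settles where its *predicted* future hits the target, so the actual state overshoots slightly; the system "knows" only one reality, history plus the future it is currently projecting.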
Eikka
2.3 / 5 (3) Jun 15, 2015
What I mean is, the system has a history, and it can "know" the future, and the future it predicts always results from the history it remembers.

So it actually knows a single reality, history + simulated future, as if it were already all true.

The actual control feedback loops simply follow the state of the system at the point where the history changes into future to derive their control values.

In humans as well, it's been pointed out that we perceive ourselves making a decision only after we actually make it. I.e. we only realize we've decided to move our hand when we actually move it, when the action enters our "history" as a measured fact.

This realization is what we understand as self-awareness. It lags behind the actual decision to move the hand, so the decision must have been made without self-awareness.
RobertKarlStonjek
not rated yet Jun 15, 2015
The difference between humans and other mammals is the ability to model environments and the self outside the immediate behavioural episode eg tomorrow.

Modelling choices in the current behavioural episode (like navigating a maze and then pausing) only requires modelling the current visible or adjacent environment with alternative scenarios. Modelling 'tomorrow' requires simulation of the entire environment and the self in a completely different condition eg imagining running the maze tomorrow while resting in the cage.

The simplest form of self is considered to be boundary recognition whereby an animal is not tempted to eat its own limbs. Some insects, for instance, do not appear to have this ability.
Eikka
not rated yet Jun 16, 2015

The simplest form of self is considered to be boundary recognition whereby an animal is not tempted to eat its own limbs. Some insects, for instance, do not appear to have this ability.


Some domestic animals don't appear to have it either, or at least not all of the time.

You can see cats trying to scratch themselves with amputated limbs without learning that the limb simply isn't there, or attacking their own hind-legs or tails and only stopping because it hurts when they chomp down on it.
anywallsocket
not rated yet Jun 17, 2015
Before anyone conflates humans' self-aware-self-aware-self-aware-self...etc, etc, with that of other organisms' self-awareness, I'll go ahead and suggest that it's essentially an inevitable feature of any organism, down to some of the simplest.

As Douglas Hofstadter explains better than I can in GEB (388):
"All the stimuli coming into the system are centered on one small mass in space. It would be quite a glaring hole in a brain's symbolic structure not to have a symbol for the physical object in which it is housed, and which plays a larger role in the events it mirrors than any other object. In fact, upon reflection, it seems that the only way one could make sense of the world surrounding a localized animate object is to understand the role of that object in relation to the other objects around it."
Torbjorn_Larsson_OM
5 / 5 (1) Jun 17, 2015
@Eikka: "This simulation runs continuously predicting possible futures, and simply puts a "pressure" on the actual control system. At every run, it simply forgets the previous prediction and makes a new one."

It is a philosophical paper, so you shouldn't expect much veracity.

They do have an empirical point though, sleep has a mechanism that makes the consciousness aware of the difference between dreaming (simulation to integrate access to suitable behavioral memories, most likely) and self. The brain runs the same 'self' mapping neurons.

According to the only biologically motivated theory of consciousness I know of, the attention focusing process that can focus attention on where we focus other attention ('being aware of being aware') - which has an identifiable brain template - goes at least back to monkeys. And no, it has nothing to do with Hofstadter's philosophical blather, except that it iterates the same.

[tbctd]
Torbjorn_Larsson_OM
5 / 5 (1) Jun 17, 2015
[ctd]

I wouldn't be surprised if there are deep homologies for these areas. Sleep would overrule that system, which is where this idea of need of a self 'mapping' neurons should come in.

But I also wouldn't be surprised if this idea is a dud, when it gets around to actual experiments. It is, after all, philosophy of the Hofstadter inane type. [Ref: no valid science has come out of his massive tome, unless I am mistaken.] :-/
Eikka
1 / 5 (1) Jun 17, 2015
sleep has a mechanism that makes the consciousness aware of the difference


Yes, and it's amnesia. When you sleep, your body paralyzes itself to prevent you from acting out the dream, and when you wake up all memory of the dream is erased because the brain suppresses long term memory formation.

The consciousness essentially is NOT aware that it's dreaming. There are people for whom the paralysis doesn't work, and they jump around fighting monsters and arguing with imaginary people all night long as if they were real. Eyes wide open, walking, talking, and in the morning they know nothing of it.

When you wake up, that's the point where you realize that you were dreaming - if you still remember what you were dreaming of. It fades in 2-3 minutes.

When it is actually happening, the brain makes no distinction that it's not real.
Eikka
not rated yet Jun 17, 2015
"All the stimuli coming into the system are centered on one small mass in space. It would be quite a glaring hole in a brain's symbolic structure not to have a symbol for the physical object in which it is housed


That's begging the question that the brain is treating the stimuli as symbols, like a computer would. That supposes a separation between the stimuli and the actual function of the brain, as if there was a barrier between the senses and the operating brain, like in the Chinese Room where symbols on paper are passed through a slit in a wall for the "brain" to operate on.

The other alternative is that the stimuli are themselves part of the operation of the brain, like a key is an essential part of the mechanism of a lock. The key is not a symbol - it is what actually turns the lock. Likewise, the electrical impulses of sensory data are not mere symbols - they are what actually operate the brain.
Vietvet
5 / 5 (1) Jun 17, 2015
@Eikka

You must not be aware of lucid dreaming. Having experienced it frequently I can attest to it being a real phenomenon

https://scholar.g...as_sdtp=

Eikka
not rated yet Jun 17, 2015
So, if the brain doesn't operate by manipulating symbols, but rather by being manipulated by the symbols, then the idea that a brain would have a symbol of itself is unnecessary and meaningless.

Rather, the idea of self-awareness is better described as a feedback system where the brain receives a signal of its own operation with a delay. The brain sees the hand move, which creates the sensation "I move my hand", or just "a hand is moving", depending on how the brain interacts with that information - whether it is self-aware.

It's not necessarily so.

You must not be aware of lucid dreaming. Having experienced it frequently I can attest to it being a real phenomenon


It is, but it's not what normally happens.

People can become aware that they're dreaming while they're dreaming, but most people just don't, and even lucid dreamers aren't aware of all their dreams, of which there are dozens per night.
Eikka
not rated yet Jun 17, 2015
The point of lucid dreaming actually confirms the point: you have to realize that you're dreaming in order to have one.

A normal dream becomes tagged as a dream only after you've woken up from it, when you piece the story together from what scraps you have in your short term memory. The memory you have conflicts with what you know happened, which is that you slept in your bed for 8 hours and couldn't have been fighting dragons. Then five minutes later you forget that you even had a dream unless you make a point of remembering it.

If the mouse dreams the future when it's making a decision, it only needs to do so in short term memory. When it wakes up from this daydream, it finds itself with conflicting information from the false memory, but that doesn't matter because the false memory fades away, and even if it didn't, it's going to perform the action anyways and it becomes a true memory.

mrburns
1 / 5 (1) Jun 20, 2015
Computers simulate environments too, so these bozos think computers are self-aware. These so-called philosophers have defined a multitude of creatures and machines which are manifestly NOT self-aware to be self-aware. Poor definitions and sloppy logic give such obviously false results. In the old days when people were ashamed of being idiots they kept their more ridiculous theories to themselves. I miss those days.
