The emergence of complex behaviors through causal entropic forces

Apr 22, 2013 by John Hewitt
A modified version of thermodynamics causes a pendulum (green) swinging from a sliding pivot (red) to stabilize in an inverted, and normally unstable, configuration, from which it has a greater variety of options for its future motion. Credit: Physics Focus / A. Wissner-Gross/Harvard Univ. & MIT

(Phys.org) —An ambitious new paper published in Physical Review Letters seeks to describe intelligence as a fundamentally thermodynamic process. The authors appeal to entropy to motivate a new formalism that has shown remarkable predictive power. To illustrate their principles they developed software called Entropica, which, when applied to a broad class of rudimentary examples, efficiently leads to unexpectedly complex behaviors. By extending traditional definitions of entropic force, they demonstrate its influence on simulated examples of tool use, cooperation, and even the stabilization of an upright pendulum.

The familiar concept of entropy, which states that systems are biased to evolve towards greater disorder, gives little indication about exactly how they evolve. Recently, researchers have begun to explore the idea that proceeding in the direction of maximum instantaneous entropy production is only one among many ways a system can go. More generally, the authors now suggest that systems which show intelligence maximize the total entropy produced over their entire path through configuration space between the present time and some future time.

In accordance with Fermat's original principle, for the simple case where light travels in a single uniform medium, the path which minimizes travel time is a straight line. If, however, the second point under consideration lies within a different medium, the time-minimizing path partitions the time spent in each medium according to their refractive indexes. By analogy to Fermat, this new and more general view of thermodynamic systems looks at the total path, rather than just the current state of the system.
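For the two-media case, this partitioning is just Snell's law. A standard derivation (textbook optics, not taken from the paper) minimizes the total travel time over the point where the ray crosses the interface:

```latex
% Travel time for a ray crossing from medium 1 (index n_1, path length l_1)
% into medium 2 (index n_2, path length l_2), with c the vacuum light speed:
t = \frac{n_1 \ell_1}{c} + \frac{n_2 \ell_2}{c}
% Setting dt/dx = 0 at the crossing point x yields Snell's law:
\frac{\sin\theta_1}{\sin\theta_2} = \frac{n_2}{n_1}
```

The bent path spends less time in the slower (higher-index) medium than the straight line between the two points would, which is the "partitioning" the article refers to.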

The first author on the paper, Alex Wissner-Gross, describes intelligent behavior as a way to maximize the capture of possible future histories of a particular system. Starting from a formalism known as the 'canonical ensemble' (which is basically a probability distribution of states), the authors ultimately derive a measure they call causal entropic forcing. The force is based not on the internal arrangements accessible to a system at any particular time, but rather on the number of arrangements it could pass through on the way to possible future states.
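Concretely, the quantity being maximized is the entropy of the distribution over future paths rather than over present microstates. In the paper's notation (summarized here from the published work; details and constants are best checked against the PRL itself), the causal path entropy of a macrostate X over a time horizon τ, and the force derived from it, are:

```latex
% Causal path entropy of macrostate X over horizon \tau: a Shannon entropy
% taken over whole future paths x(t), not over instantaneous states.
S_c(X, \tau) = -k_B \int_{x(t)} \Pr\!\big(x(t) \mid x(0)\big)\,
               \ln \Pr\!\big(x(t) \mid x(0)\big)\, \mathcal{D}x(t)
% Causal entropic force at the present state X_0; T_c sets its strength:
F(X_0, \tau) = T_c \,\nabla_X S_c(X, \tau)\big|_{X_0}
```

The gradient pushes the system toward states from which the greatest diversity of future paths remains reachable, which is what produces the centering, tool-use, and balancing behaviors described below.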

Video: Does Cosmology Hint At How To Build Artificial Minds?

In a practical simulation of a particle in a box, for example, the effect of causal entropic forcing is to keep the particle in a relatively central location. This effect can be understood as the system maximizing the diversity of causal paths that would be accessible by Brownian motion within the box. The authors also simulated different sized disks diffusing in a 2D geometry. With application of the causal forcing function, the system rapidly produced behaviors where larger disks "used" smaller disks to release other disks from trapped locations. In different scenarios of this general paradigm, disks cooperated to achieve seemingly improbable results.
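The particle-in-a-box intuition can be sketched numerically. The toy below is not Entropica and not the paper's path-integral estimator; it is a minimal stand-in (all names and parameters are made up for illustration) that scores a starting position by the Shannon entropy of the states visited by random future walks, which is highest away from the walls:

```python
import math
import random

def path_entropy(x0, n_paths=2000, horizon=20, box=(0.0, 10.0),
                 n_bins=10, seed=0):
    """Crude proxy for causal path entropy: sample random-walk futures
    from x0 inside the box and return the Shannon entropy (in nats) of
    the histogram of all positions those futures visit."""
    rng = random.Random(seed)
    lo, hi = box
    counts = [0] * n_bins
    for _ in range(n_paths):
        x = x0
        for _ in range(horizon):
            # take a unit Brownian-like step; the walls clamp the walker
            x = min(max(x + rng.choice((-1.0, 1.0)), lo), hi)
            b = min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)
            counts[b] += 1
    total = n_paths * horizon
    return -sum(c / total * math.log(c / total) for c in counts if c)

# A particle mid-box has a richer set of accessible futures than one
# pressed against a wall, so its estimated path entropy is larger.
print(f"centre: {path_entropy(5.0):.2f} nats, wall: {path_entropy(0.0):.2f} nats")
```

A controller that repeatedly nudged the particle toward whichever neighboring position scores higher under this function would drift it toward the center of the box, qualitatively reproducing the behavior the paper reports for its particle-in-a-box example.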

Many of these kinds of behaviors might also be compared to activities we now know exist in the normal biochemical operations of cells. For example, enzymes use complex changes in conformation, and various small cofactors, to manipulate proteins and other molecules. The nucleus extrudes mRNAs through pores against entropic forces which tend to hold the polymer coiled up in the interior. The speed and efficiency at which machines like ribosomes and polymerases operate suggest that effects other than pure Brownian motion are responsible for delivering their substrates and subsequently binding them with the selectivity that is observed.

The biggest imaginative leap of the paper involved simulating a rigid, inverted pendulum stabilized by a translating cart. The authors suggest that when operating under a causal forcing function, this system bears a rudimentary resemblance to upright walking. While this example may appear more relevant to finding new ways to program walking robots than to understanding the transition to walking in hominids, the overall diversity of behaviors modeled with this formalism does not fail to impress.

The spontaneous emergence of complex behaviors now has a new tool which can be used to probe its possible origins. New methods of solving traditional challenges in artificial intelligence may also be investigated. Programming machines to play games like Go, where humans still appear to have the edge, might also make use of these methods. The Entropica simulation software is available in demo form from the authors' website, as are other materials related to their new paper.


More information: Causal Entropic Forces, Phys. Rev. Lett. 110, 168702 (2013) prl.aps.org/abstract/PRL/v110/i16/e168702

Abstract
Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human "cognitive niche"—tool use and social cooperation—to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.


User comments : 21


Kirk1_0
not rated yet Apr 22, 2013
This is very interesting. More details about the model itself would be appreciated though.
Perhaps application towards RNA/DNA/protein synthesis can shed light on the creation of the first life form.
xeb
2 / 5 (4) Apr 22, 2013
I suppose that, just as 95% of studies with keywords like "complexity", "emergence", "evolution of order", "self-organization", this one is also blind to the range of dependency of complex systems on their environment. To the fuzziness of their borders - how many external systems superimpose within a given local volume, on different levels, with different efficiency bonds. Self-organization is a fiction. Eco-self-organization (or self-eco-organization) is a fact. It is an evolution of nested multilevel niches of countless systems, inter-systems, sub-systems - not the evolution of separate systems. So the somewhat mysterious source of "more intelligent phase-space management" that Entropica's algorithms partially imitate will always be trackable only within environmental-scale micro-level probabilities. It is global civilization (humans + their tools + resources + ...) that feeds by decreasing energetic potentials - individual humans are just replaceable carriers.
:)
antialias_physorg
4 / 5 (4) Apr 22, 2013
If anyone is interested, here's the link to the full paper:
http://www.alexwg...8702.pdf
beleg
1 / 5 (2) Apr 22, 2013
Zero entropy has nothing to protect. Neither symmetry nor order. The information is all there.
There are still mathematical foundations to be built. For example, for topological order.
grondilu
not rated yet Apr 22, 2013
This is almost scary.
antialias_physorg
3 / 5 (2) Apr 23, 2013
This is almost scary.

Why? What is scary about (possibly) getting a grip on what intelligence is (and it, again possibly, being rather simple)?
Think of the possibilities. If it's easy to make intelligent choices then it may be easier than we thought to construct artificial intelligence.

Sure, it's another (and final) blow to our belief in 'human superiority' and uniqueness...but so what?
Doug_Huffman
not rated yet Apr 23, 2013
Thanks for the paper's link URL. E. T. Jaynes' stature grows!
JVK
2 / 5 (4) Apr 23, 2013
Sure, it's another (and final) blow to our belief in 'human superiority' and uniqueness...but so what?


See for comparison: Nutrient-dependent / Pheromone-controlled thermodynamics and thermoregulation, which represents adaptively evolved ecological, social, neurogenic, and socio-cognitive niche construction sans causal entropic forces. http://dx.doi.org...e.643393

Thus, complex behaviors are adaptively evolved sans mutations theory as modeled in Nutrient-dependent / Pheromone-controlled Adaptive Evolution. http://dx.doi.org...e.155672
drhoo
5 / 5 (1) Apr 23, 2013
Life reverses the course of entropy.
antialias_physorg
3 / 5 (2) Apr 23, 2013
Life reverses the course of entropy.

Whatever gave you that idea?
drhoo
not rated yet Apr 23, 2013
It seems life forms from disordered material into something more ordered, and yes i don't know thermo at all its just a thing to ponder whilst sipping a pint.
gavin_p_taylor
1 / 5 (1) Apr 23, 2013
This is still nowhere near general intelligence, because it has no purpose. Human intelligence for example is something rich which first must be DEFINED. So this isn't general intelligence, it's slapping an algorithm onto the use-case. Although intelligence in general including our very specific brand of human intelligence is invariably built on such thermodynamic and so on systems. But this isn't more fundamental than the stuff I think about.

AI is still a very stagnated field because of MODERN ECONOMICS AND ITS PROPRIETARISM causing software in general to be horrifically inaccessible to the gamers/users that could be adding to general functionality (i.e. building a central AI definition of human meaning!) every day; not because of some magic god algorithm that will save our lazy asses from the real work-- again; this needs stressing. AI won't just compute itself, and if it does it'll be here in far longer than 50 years, so good luck with ever seeing it.

Other than that, very interesting
gavin_p_taylor
1 / 5 (1) Apr 23, 2013
General intelligence must be purely abstract and free of physical, planetary-atmospheric-specific models. You should know this if you've thought about computing/AI. Also, a true theory of entropy would have to account for the thoughts we have in creating the theory of entropy ;)
event
not rated yet Apr 23, 2013
It seems life forms from disordered material into something more ordered, and yes i don't know thermo at all its just a thing to ponder whilst sipping a pint.

I can see what you mean. The more generalized view is that a disordered system with high entropy can be (locally) reversed (ie, become more ordered, less disorganized) by the input of energy. So, in order to tidy up your disordered room, you must expend energy to overcome entropy.

But energy can come from many sources, not just through living agents. The formation of those living agents also required an input of energy.
antialias_physorg
3 / 5 (2) Apr 24, 2013
and yes i don't know thermo at all

Well, then have a quick sashay over to wikipedia or a textbook. It's not all that hard (and it will quickly show you that your idea about life is wrong from the get-go).
Any time you have spent pondering this was pretty much wasted and could have been saved by a 20 second google.

General intelligence must be purely abstract and free of physical, planetary-atmospheric-specific models.

Which this is, if you had read the paper. The examples they show are just random applications of the principle to show that it ISN'T limited to a specific model.

Also, a true theory of entropy would have to account for the thoughts we have in creating the theory of entropy

Why? What's so different about the thermodynamics of a self referential thought as opposed to a non-self referential one?
gavin_p_taylor
1 / 5 (1) Apr 29, 2013
>Which this is, if you had read the paper. The examples they show are just random applications of the principle to show that it ISN'T limited to a specific model.

It's an algorithm. Our physical reality is far more complex, and has effects of its own acting on our 'environment' that are ongoing and very specific to our universe and, more specifically, our planet's atmosphere. To begin talking about general intelligence you have to first define what frame of reference by which you are even defining 'abstract'. To introduce an algorithm on a few pages and say "boom- intelligence!" doesn't explain what is inevitably a set/subset of complex systems interacting ("the universe"). Intelligence is everywhere, effectively, and the observer selectively attributes its meaning.

>Why? What's so different about the thermodynamics of a self referential thought as opposed to a non-self referential one?

Complexity of its meaning. I guess science can't go broader than traditional media.
gavin_p_taylor
1 / 5 (1) Apr 29, 2013
If our society was serious about understanding very complex things of this nature, then we'd have more people as scientists, for a start. And for that matter, as Near-Term Extinction is here, I don't see why all fields of science shouldn't turn their focus onto the current converging crises of the world- economic collapse; climate change; lack of critical thinking.

http://guymcphers...-update/
antialias_physorg
not rated yet Apr 30, 2013
If our society was serious about understanding very complex things of this nature, then we'd have more people as scientists, for a start.

You can't just arbitrarily up the number of scientists.
1) Being a scientist requires the sort of brain power very few have
2) Science has to be funded without knowing whether it'll pan out, since you're always totally fishing in the unknown. There must be a solid economic basis for us to be able to afford doing science.

Complexity of its meaning.

So? What has that got to do with thermodynamics?
A landslide is more complex than a single stone falling downhill. From a thermodynamic standpoint there is no QUALITATIVE difference (only a quantitative one).

To introduce an algorithm on a few pages and say "boom- intelligence!"

Why not? Einstein's paper on relativity isn't any longer (the E equals m c squared paper is 3 pages long). And look where it led us.
gavin_p_taylor
1 / 5 (1) May 01, 2013
>From a thermodynamic standpoint there is no QUALITATIVE difference (only a quantitative one).
>Why not? Einstein's paper on relativity isn't any longer (the E equals m c squared paper is 3 pages long). And look where it led us.

Relativity is a fundamental aspect of intelligence/experience, whereas, intelligence itself as an entirely imaginary (and individual!) human construct cannot be defined like that, because it's entirely subjective (as well as viewed as "relating to the universe"), which means it requires an observer to be defined, and like meaning it cannot be quantified (in a broad/absolute sense), because meaning doesn't inherently exist like laws of the universe do, and thus you have to first DEFINE (in rich detail) what the observer is and how he functions, to express something there, because it is context contingent. "There is more wisdom in your body than in your deepest philosophy." - Nietzsche

similarly; watch up to 5 min in - http://www.youtub...xnqGJLeu
vindemiatrix
not rated yet May 14, 2013
The presented algorithm seems also to represent general statistical behaviour, which does not exclude unknown events but merely associates low probability with them. As raindrops usually move towards earth, except on the front window of a driving car. During my practical training in a workshop it happened to me that a hammer fell from my work bench. It bounced once and finally stood upright on its handle on the floor. I have not experienced a repetition in the last 54 years.
Our physical laws are derived from observing the most probable incidents, which are then described as logical. But over huge time frames, events and processes with low probability will occur, such as the evolution of life and intelligence. Math teacher.
grondilu
not rated yet Jun 01, 2013
I would like to see a proof of this concept in a game-learning situation. Chess or another board game. The rules would not be coded in any way whatsoever. The machine would just lose the game if it plays a wrong move, and the board would immediately be reset to the initial position.
Hopefully, the machine would learn the moves by experience, and if the principles of this Entropica software are right, I guess it should do its best to win.