(Phys.org) —An ambitious new paper published in Physical Review Letters seeks to describe intelligence as a fundamentally thermodynamic process. The authors appeal to entropy to motivate a new formalism that has shown remarkable predictive power. To illustrate their principles they developed software called Entropica, which, when applied to a broad class of rudimentary examples, efficiently produces unexpectedly complex behaviors. By extending the traditional definition of entropic force, they demonstrate its influence on simulated examples of tool use, cooperation, and even the stabilization of an upright pendulum.
The familiar concept of entropy, which holds that systems tend to evolve toward greater disorder, gives little indication of exactly how they evolve. Recently, physicists have begun to explore the idea that proceeding in the direction of maximum instantaneous entropy production is only one among many possible ways to evolve. More generally, the authors now suggest that systems which show intelligence maximize the total entropy produced over their entire path through configuration space between the present time and some future time.
In accordance with Fermat's original principle, in the simple case where light travels through a uniform medium, the path that minimizes travel time is a straight line. If, however, the second point under consideration lies in a different medium, the fastest path divides the time spent in each medium according to their refractive indices. By analogy with Fermat, this new and more general view of thermodynamic systems considers the total path, rather than just the current state of the system.
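Worked out explicitly, the least-time condition at a flat interface between two media with refractive indices n1 and n2 gives Snell's law, n1 sin(θ1) = n2 sin(θ2), where θ1 and θ2 are the angles the ray makes with the normal on each side of the boundary; that relation is exactly what fixes how the travel time is partitioned between the two media.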
The first author on the paper, Alex Wissner-Gross, describes intelligent behavior as a way to maximize the capture of possible future histories of a particular system. Starting from a formalism known as the canonical ensemble (essentially a probability distribution over states), the authors ultimately derive a quantity they call the causal entropic force. Under causal entropic forcing, entropy is based not on the internal arrangements accessible to a system at any particular time, but rather on the number of arrangements it could pass through on the way to possible future states.
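Schematically, and following the definitions given in the Letter (the precise notation and normalization should be checked against the paper itself), the causal path entropy of a present macrostate X over a time horizon τ is the Shannon entropy of the distribution of future paths x(t) reachable from it:

S_c(X, τ) = -k_B ∫ Pr(x(t) | x(0)) ln Pr(x(t) | x(0)) Dx(t),

where the integral runs over all paths of duration τ starting from the system's current microstate x(0). The causal entropic force is then the gradient of this path entropy with respect to the present state, scaled by a temperature-like constant T_c:

F(X_0, τ) = T_c ∇_X S_c(X, τ), evaluated at X = X_0.

In their simulations the authors estimate this force numerically by sampling future trajectories.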
In a simulation of a particle in a box, for example, the effect of causal entropic forcing is to keep the particle in a relatively central location. This can be understood as the system maximizing the diversity of causal paths that remain accessible to it through Brownian motion within the box. The authors also simulated disks of different sizes diffusing in a 2D geometry. Under the causal forcing function, the system rapidly produced behaviors in which larger disks "used" smaller disks to release other disks from trapped locations. In other scenarios within this general paradigm, disks cooperated to achieve seemingly improbable results.
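To make the particle-in-a-box behavior concrete, here is a minimal, hedged sketch in Python. It is not the authors' Entropica code: it collapses the full path-entropy calculation into a crude proxy (the Shannon entropy of coarse-grained endpoints of sampled Brownian futures), works in 1D rather than 2D, and uses arbitrary parameters. Still, a particle nudged up this entropy gradient tends to drift toward the middle of the box, where the most diverse futures remain reachable.

import numpy as np

rng = np.random.default_rng(0)

L = 1.0           # box length
SIGMA = 0.05      # Brownian step size per time step
HORIZON = 50      # future steps sampled per trajectory
N_PATHS = 300     # Monte Carlo trajectories per entropy estimate
N_BINS = 20       # coarse-graining used to count distinct outcomes

def reflect(y):
    """Fold positions back into [0, L] (reflecting walls)."""
    y = np.abs(y) % (2.0 * L)
    return np.where(y > L, 2.0 * L - y, y)

def future_entropy(x0):
    """Shannon entropy of coarse-grained endpoints of Brownian futures
    started at x0 -- only a crude proxy for the paper's causal path entropy."""
    steps = rng.normal(0.0, SIGMA, size=(N_PATHS, HORIZON))
    endpoints = reflect(x0 + steps.sum(axis=1))
    hist, _ = np.histogram(endpoints, bins=N_BINS, range=(0.0, L))
    p = hist[hist > 0] / N_PATHS
    return float(-(p * np.log(p)).sum())

# Overdamped dynamics: Brownian noise plus a drift up the entropy gradient,
# estimated by a (noisy) finite difference of the Monte Carlo entropy proxy.
x, eps, gain = 0.1, 0.02, 0.05
for _ in range(2000):
    grad = (future_entropy(min(x + eps, L)) - future_entropy(max(x - eps, 0.0))) / (2 * eps)
    x = float(reflect(x + gain * grad + rng.normal(0.0, SIGMA)))

print(f"final position x = {x:.2f} (box center is {L/2:.2f})")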
Many of these kinds of behaviors might also be compared to activities we now know exist in the normal biochemical operations of cells. For example, enzymes use complex conformational changes, and various small cofactors, to manipulate proteins and other molecules. The nucleus extrudes mRNAs through pores against entropic forces that tend to keep the polymer coiled up in the interior. The speed and efficiency with which machines like ribosomes and polymerases operate suggest that effects beyond pure Brownian motion are responsible for delivering their substrates and then binding them with the selectivity that is observed.
The biggest imaginative leap of the paper involved simulating a rigid, inverted pendulum stabilized by a translating cart. The authors suggest that, when operating under a causal forcing function, this system bears a rudimentary resemblance to upright walking. While the example may seem more relevant to programming walking robots than to understanding the transition to walking in hominids, the overall diversity of behaviors modeled with this formalism is impressive.
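For readers who want to see what the pendulum-and-cart setup involves, the toy planner below uses the textbook cart-pole equations of motion and, at each step, applies the cart force that leaves the largest number of sampled random futures "alive" (pole upright, cart on the track). This is only in the spirit of causal entropic forcing, not the authors' actual algorithm, and every parameter (candidate forces, horizon, thresholds) is an arbitrary choice for illustration.

import math
import random

# Classic cart-pole dynamics (Barto/Sutton/Anderson formulation).
# theta is the pole angle from the upright vertical, x the cart position.
GRAVITY = 9.8
M_CART, M_POLE = 1.0, 0.1
HALF_LEN = 0.5               # half the pole length
DT = 0.02
FORCES = (-10.0, 0.0, 10.0)  # candidate forces on the cart

def step(state, force):
    x, x_dot, theta, theta_dot = state
    total_m = M_CART + M_POLE
    temp = (force + M_POLE * HALF_LEN * theta_dot ** 2 * math.sin(theta)) / total_m
    theta_acc = (GRAVITY * math.sin(theta) - math.cos(theta) * temp) / (
        HALF_LEN * (4.0 / 3.0 - M_POLE * math.cos(theta) ** 2 / total_m))
    x_acc = temp - M_POLE * HALF_LEN * theta_acc * math.cos(theta) / total_m
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def alive(state):
    x, _, theta, _ = state
    return abs(x) < 2.4 and abs(theta) < 0.7  # cart on track, pole not fallen

def open_futures(state, n_rollouts=30, horizon=40):
    """Count how many random future force sequences keep the system alive,
    a crude stand-in for the diversity of accessible future paths."""
    count = 0
    for _ in range(n_rollouts):
        s = state
        ok = True
        for _ in range(horizon):
            s = step(s, random.choice(FORCES))
            if not alive(s):
                ok = False
                break
        count += ok
    return count

random.seed(1)
state = (0.0, 0.0, 0.05, 0.0)   # slightly tilted pole
for t in range(300):
    # Pick the force whose immediate successor state keeps the most futures open.
    force = max(FORCES, key=lambda f: open_futures(step(state, f)))
    state = step(state, force)
    if not alive(state):
        print(f"fell at step {t}")
        break
else:
    print(f"still upright after 300 steps, theta = {state[2]:.3f} rad")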
The spontaneous emergence of complex behavior now has a new tool that can be used to probe its possible origins. New ways of attacking traditional challenges in artificial intelligence may also be investigated. Programming machines to play games like Go, where humans still appear to have the edge, might also make use of these methods. The Entropica simulation software is available in demo form from the authors' website, as are other materials related to the new paper.
More information: Causal Entropic Forces, Phys. Rev. Lett. 110, 168702 (2013) prl.aps.org/abstract/PRL/v110/i16/e168702
Abstract
Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human "cognitive niche"—tool use and social cooperation—to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.

Comments

Kirk1_0
not rated yet Apr 22, 2013
Perhaps application towards RNA/DNA/protein synthesis can shed light on the creation of the first life form.

xeb
2 / 5 (4) Apr 22, 2013
:)

antialias_physorg
4 / 5 (4) Apr 22, 2013
http://www.alexwg...8702.pdf

beleg
1 / 5 (2) Apr 22, 2013
There are still mathematical foundations to be built. For example, for topological order.

grondilu
not rated yet Apr 22, 2013

antialias_physorg
3 / 5 (2) Apr 23, 2013
Why? What is scary about (possibly) getting a grip on what intelligence is (and it, again possibly, being rather simple)?
Think of the possibilities. If it's easy to make intelligent choices then it may be easier than we thought to construct artificial intelligence.
Sure, it's another (and final) blow to our belief in 'human superiority' and uniqueness...but so what?

Doug_Huffman
not rated yet Apr 23, 2013

JVK
2 / 5 (4) Apr 23, 2013
See for comparison: Nutrient-dependent / Pheromone-controlled thermodynamics and thermoregulation, which represents adaptively evolved ecological, social, neurogenic, and socio-cognitive niche construction sans causal entropic forces. http://dx.doi.org...e.643393
Thus, complex behaviors are adaptively evolved sans mutations theory as modeled in Nutrient-dependent / Pheromone-controlled Adaptive Evolution. http://dx.doi.org...e.155672

drhoo
5 / 5 (1) Apr 23, 2013

antialias_physorg
3 / 5 (2) Apr 23, 2013
Whatever gave you that idea?

drhoo
not rated yet Apr 23, 2013

gavin_p_taylor
1 / 5 (1) Apr 23, 2013
AI is still a very stagnant field because of MODERN ECONOMICS AND ITS PROPRIETARISM, which makes software in general horrifically inaccessible to the gamers/users who could be adding to general functionality (i.e. building a central AI definition of human meaning!) every day; not because of some magic god algorithm that will save our lazy asses from the real work. Again, this needs stressing: AI won't just compute itself, and if it does it'll be here in far longer than 50 years, so good luck with ever seeing it.
Other than that, very interesting.

gavin_p_taylor
1 / 5 (1) Apr 23, 2013

event
not rated yet Apr 23, 2013
I can see what you mean. The more generalized view is that a disordered system with high entropy can be (locally) reversed (i.e., become more ordered, less disorganized) by the input of energy. So, in order to tidy up your disordered room, you must expend energy to overcome entropy.
But energy can come from many sources, not just through living agents. The formation of those living agents also required an input of energy.

antialias_physorg
3 / 5 (2) Apr 24, 2013
Well, then have a quick sashay over to Wikipedia or a textbook. It's not all that hard (and it will quickly show you that your idea about life is wrong from the get-go).
Any time you have spent pondering this was pretty much wasted and could have been saved by a 20-second Google search.
Which this is, if you had read the paper. The examples they show are just random applications of the principle to show that it ISN'T limited to a specific model.
Why? What's so different about the thermodynamics of a self-referential thought as opposed to a non-self-referential one?

gavin_p_taylor
1 / 5 (1) Apr 29, 2013
It's an algorithm. Our physical reality is far more complex, and has effects of its own acting on our 'environment' that are ongoing and very specific to our universe and, more specifically, our planet's atmosphere. To begin talking about general intelligence you have to first define the frame of reference by which you are even defining 'abstract'. To introduce an algorithm on a few pages and say "boom: intelligence!" doesn't explain what is inevitably a set/subset of complex systems interacting ("the universe"). Intelligence is everywhere, effectively, and the observer selectively attributes its meaning.
>Why? What's so different about the thermodynamics of a self-referential thought as opposed to a non-self-referential one?
Complexity of its meaning. I guess science can't go broader than traditional media.

gavin_p_taylor
1 / 5 (1) Apr 29, 2013
http://guymcphers...-update/

antialias_physorg
not rated yet Apr 30, 2013
You can't just arbitrarily up the number of scientists.
1) Being a scientist requires the sort of brain power very few have.
2) Science has to be funded without knowing whether it'll pan out, since you're always fishing in the unknown. There must be a solid economic basis for us to be able to afford doing science.
So? What has that got to do with thermodynamics?
A landslide is more complex than a single stone falling downhill. From a thermodynamic standpoint there is no QUALITATIVE difference (only a quantitative one).
Why not? Einstein's paper on relativity isn't any longer (the E = mc² paper is 3 pages long). And look where it led us.

gavin_p_taylor
1 / 5 (1) May 01, 2013
>Why not? Einstein's paper on relativity isn't any longer (the E = mc² paper is 3 pages long). And look where it led us.
Relativity is a fundamental aspect of intelligence/experience, whereas intelligence itself, as an entirely imaginary (and individual!) human construct, cannot be defined like that, because it's entirely subjective (as well as being viewed as "relating to the universe"). That means it requires an observer to be defined, and like meaning it cannot be quantified (in a broad/absolute sense), because meaning doesn't inherently exist the way laws of the universe do. You first have to DEFINE (in rich detail) what the observer is and how it functions in order to express something there, because it is context contingent. "There is more wisdom in your body than in your deepest philosophy." - Nietzsche
Similarly, watch up to 5 min in: http://www.youtub...xnqGJLeu

vindemiatrix
not rated yet May 14, 2013
...which does not exclude unknown events, but merely assigns a low probability to them.
As raindrops usually move towards the earth, except on the front window of a moving car.
During my practical training in a workshop it happened to me that a hammer fell from my workbench. It bounced once and finally stood upright on its handle on the floor.
I have not experienced a repetition in the last 54 years.
Our physical laws are derived from observing the most probable incidents, which are then described as logical. But over huge time frames, events and processes with low probability will occur, such as the evolution of life and intelligence. Math teacher.

grondilu
not rated yet Jun 01, 2013
Hopefully, the machine would learn the moves by experience, and if the principles behind this Entropica software are right, I guess it should do its best to win.