Spaun, the new human brain simulator, can carry out tasks (w/ video)

Nov 30, 2012 by Lin Edwards
A high-level depiction of the Spaun model, with all of the central features of a general Semantic Pointer Architecture.

(Phys.org)—One of the challenges of understanding the complex behavior of animals is to relate the behavior to the complex processes occurring within the brain. So far, neural models have not been able to bridge this gap, but a new software model, Spaun, goes some way to addressing this problem.

The Semantic Pointer Architecture Unified Network (Spaun) is a computer model of the human brain built by Professor Chris Eliasmith and colleagues at the University of Waterloo in Canada. It comprises around two and a half million virtual neurons organized into subsystems, rather like real neurons in regions of the human brain associated with vision, short-term memory, and so on. (The human brain has roughly 100 billion neurons.)

Spaun is presented with sequences of visual images across eight separate tasks. It processes the information presented to it and then decides what action to take. It can recognize and remember numbers written in different handwriting styles, and can copy them using a physically modeled arm. Spaun can also answer questions about numbers and complete number series after seeing examples.

This video provides a brief introduction to Spaun, showing the model's 'neural' activity superimposed onto an illustration of a human brain. Credit: Chris Eliasmith, et al.

These tasks are simple, but they capture many features of neuroanatomy and physiology, including the abilities to perceive, recognize, and carry out required behaviors. They also require an enormous amount of computing power, with the computer needing two hours of processing time for each second of Spaun simulation.
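Part of what makes the simulation so expensive is that every one of the 2.5 million neurons is a spiking unit that must be stepped forward in small time increments. Eliasmith's group builds its models from simple leaky integrate-and-fire (LIF) neurons; the sketch below is a generic, illustrative LIF unit with textbook parameter values, not Spaun's actual settings:

```python
def simulate_lif(input_current, dt=0.001, tau_rc=0.02,
                 refractory_steps=2, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: integrate input toward threshold,
    emit a spike, reset, and stay silent for a short refractory period."""
    v = 0.0
    ref = 0              # remaining refractory steps after a spike
    spike_times = []
    for step, current in enumerate(input_current):
        if ref > 0:
            ref -= 1     # neuron cannot fire right after spiking
            continue
        # membrane voltage leaks toward the input with time constant tau_rc
        v += (dt / tau_rc) * (current - v)
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = 0.0
            ref = refractory_steps
    return spike_times

# one simulated second of constant, supra-threshold input current
spikes = simulate_lif([1.5] * 1000)
print(len(spikes))  # regular spiking at roughly 40 Hz
```

Even this toy neuron needs a thousand update steps per simulated second; multiply by millions of neurons plus the connections between them, and the two-hours-per-second figure becomes plausible.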

The most surprising feature of Spaun, according to Prof. Eliasmith and colleagues, is that it has human-like flaws. For example, it has trouble remembering long lists of numbers, and is better at remembering numbers at the beginnings and ends of lists. It also hesitates before answering questions, just as humans do. These flaws may be useful in future robots, Eliasmith said, as they would make robots seem more human-like and therefore easier to interact with.

This video shows Spaun's 'neural' activity during four different tasks and explains how the researchers decoded it. Credit: Chris Eliasmith, et al.

Eliasmith said Spaun is the first brain simulator able to complete a series of tasks and demonstrate behaviors, even though bigger brain models have been built in the past, such as the Blue Brain Project's model, with a million neurons, and IBM's SyNAPSE, with a billion simulated neurons.

Spaun is more similar to the human brain than previous models, Eliasmith said, and it could therefore be used in the study of some brain disorders. In a recent experiment, for example, he examined the effects of neurons "dying off" in a simulation of the aging human brain, an experiment that would have been unethical with human subjects. The model might also be useful in the development of multi-tasking artificial intelligence applications and robotics.

Spaun does have its limitations: at present it can only carry out the tasks given to it and cannot learn anything new, and its computer processing time is lengthy.

The research paper was published in the journal Science.


More information: A Large-Scale Model of the Functioning Brain, Science 30 November 2012: Vol. 338 no. 6111 pp. 1202-1205. DOI: 10.1126/science.1225266

ABSTRACT
A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described, large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called "Spaun") that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks.




User comments : 15


Tausch
1 / 5 (1) Nov 30, 2012
It also hesitates before answering questions, just as humans do.

This is one of many hallmarks of acquired human language.
As if the content of meaning correlates with the length of the processing.
Kudos.
TheKnowItAll
1 / 5 (1) Nov 30, 2012
Not enough RAM :(
pauljpease
not rated yet Nov 30, 2012
Pretty cool stuff. Can't wait for the full-scale real-time simulation. I was wondering how far off that might be. This simulation was 2.5 million neurons (1/40,000th of the total neurons in a brain) and it operates at 1/7200th the speed of a real brain. So we need computational performance roughly 288 million times what was done in this experiment (assuming no major nonlinear effects as the simulation is scaled up). Assuming they aren't doing this on the world's best supercomputer, maybe they are using something that is a few hundred times less powerful than what the maximum with today's computers could do given the resources. That leaves a factor of a million left to go. If it ever happens, it won't be as cheap as getting your girlfriend knocked up...
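The scaling arithmetic in the comment above checks out. Here it is as a few lines of Python: the 100-billion-neuron count and the 2-hours-per-simulated-second figure come from the article, while the linear-scaling assumption is the commenter's own.

```python
brain_neurons = 100e9    # approximate neurons in a human brain (from the article)
spaun_neurons = 2.5e6    # neurons in the Spaun model
slowdown = 2 * 3600      # 2 hours of computing per simulated second = 7200x real time

neuron_ratio = brain_neurons / spaun_neurons   # 40,000x more neurons needed
required_speedup = neuron_ratio * slowdown     # assuming cost scales linearly in both

print(f"{required_speedup:,.0f}")  # 288,000,000, i.e. the ~288 million factor above
```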
El_Nose
not rated yet Nov 30, 2012
no, it has plenty of RAM -- it needs more hard drive space -- working memory vs. long-term storage of concepts
Intensero
not rated yet Nov 30, 2012
I love this. So many possibilities. Keep up the good work researchers and think about scaling this up with a stronger supercomputer. Learning AI, massive storage, and super computing speeds will make this sort of innovation vital to so many applications.
MrVibrating
not rated yet Nov 30, 2012
@pauli

20 years ago we had 20Mb hard drives. Today we have 4Tb units, 20,000x the density. At this rate, in another 20 years that'll be 80 Petabytes. Another 10 yrs, and home PCs will be into exabyte territory. Of course, this neglects HDD obsolescence; by then, SSD densities could be benefiting from the kinds of single-atom transistors reported here recently...

RAM has seen similar progress, esp. in value - 30 years ago 16k cost £60: adjusted for 4x inflation = £240 in today's cash, enough for 128Gb of DDR3, an 8 million-fold price drop!

CPU speeds are up 1,000x, from 4 MHz in the '80s to 4 GHz today.

Extrapolating, in 30yrs time the £120 needed for six Bacardi Breezers COULD buy you over 60 Petabytes, enough for 24 human-like AIs, or one super-human AI with an IQ of 40,000...

Besides, we'll probably see useful, strong AI before we manage full human-like sentience - idiot savants that'll outclass any human at their dedicated tasks, while remaining clueless in every other respect.
lengould100
not rated yet Dec 01, 2012
I recommend Steven Pinker's "The Blank Slate" for clues.
ShotmanMaslo
not rated yet Dec 01, 2012
But.. ..muh qualia..

Wow, this is cool, I did not expect we could do brain simulation, even if crude, in the present. Wiki has an interesting overview of computational capacity required for brain simulation at various levels of detail:

http://en.wikiped...capacity
Eikka
2.5 / 5 (4) Dec 01, 2012
MrVibrating:

CPU clock frequencies and RAM access latencies have been stuck for the last 5-10 years. Moore's law "continues" because it has been re-defined to include parallel processing, which in some senses is like saying that two cars go twice as fast as one car.

It's not actually that simple, because the actual processing speed is limited to the speed you can run the linear portion of the task, and adding too many processors gives you communications overhead that actually reduces your speed.

Besides, a machine that simulates a brain is always more complex than the brain itself, by a large margin, and the brain is a very optimized piece of circuitry. It's not given that we could fit such a computer in any meaningful size.
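Eikka's point about the "linear portion of the task" is Amdahl's law: the serial fraction of a workload caps the speedup from adding processors. A quick illustrative calculation (the 95% parallel figure is an arbitrary example, not a measurement of Spaun):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when only part of a task parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# even with 95% of the work perfectly parallel, 1000 processors
# give less than a 20x speedup; the 5% serial part dominates
print(round(amdahl_speedup(0.95, 1000), 1))  # 19.6
```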
cyberCMDR
not rated yet Dec 01, 2012
What they need to do is to create hardware modules that can act as a small portion of the neural net (say about 10,000 neurons in programmable memory), and put them together. Hardware can go much faster than software.
MrVibrating
5 / 5 (1) Dec 01, 2012
@Eikka - i didn't mean to imply speed was a straightforward synonym for power; a modern i7 processor clocks up over 300,000x more instructions per second than a humble old 6502, so no, 1,000x higher speed isn't the most telling statistic, i agree... however this only underlines my point wrt our looming capabilities.

Of course, if we compared modern teraflop GFX cards against old VGA chips, the progress is even more pronounced.

I also agree a simulator is usually more complex than the simulation, though i'm not sure that's necessarily always so, still, once single-molecule transistors are practical, data densities go up to exabytes per gram of storage material, with enough room for dozens of whole-brain images on a microSD card.

Moore referred specifically to transistor value rather than speed or power, however even as a more vernacular by-word for the inevitability of progress, all indications are that we'll continue working around the limits and opening up new avenues..
Parsec
not rated yet Dec 02, 2012
I suspect that within 10 years computer hardware will be constructed that emulates brain material much more directly. That is, the structure of the hardware itself will look much closer to a natural brain than it does to a conventional computer.

This is really when brains with a few billion or more neurons will become possible.
MrVibrating
not rated yet Dec 02, 2012
I suspect that within 10 years computer hardware will be constructed that emulates brain material much more directly. That is, the structure of the hardware itself will look much closer to a natural brain than it does to a conventional computer.

This is really when brains with a few billion or more neurons will become possible.
Yes, memristors for example will be just one such step in that direction...
Eikka
5 / 5 (1) Dec 03, 2012
I also agree a simulator is usually more complex than the simulation, though i'm not sure that's necessarily always so,


If it's more complex, it's called a simulation. If it's less complex, it's called an analog.

still, once single-molecule transistors are practical, data densities go up to exabytes per gram of storage material, with enough room for dozens of whole-brain images on a microSD card.


One has to ask if the brain isn't already using something very similar, but is constrained in size by the need for redundancy to actually keep it running? DNA is already pretty dense in terms of information, and the proteins it produces are by and large like single-molecule transistors - except they do much more.

What use is a circuit made of transistors the size of molecules if it breaks all the time and cannot repair itself?
wwqq
not rated yet Jan 10, 2013
Moore's law "continues" because it has been re-defined to include parallel processing, which in some senses is like saying that two cars go twice as fast as one car.


Bollocks. Moore's law means, and has always meant, that TRANSISTOR DENSITY doubles every ~1.5 years. Moore's law has never said anything about speed.