IBM creates new foundation to program SyNAPSE chips inspired by human brain

Aug 08, 2013
Visualization of a simulated network of neurosynaptic chips. Credit: IBM

Scientists from IBM today unveiled a breakthrough software ecosystem designed for programming silicon chips that have an architecture inspired by the function, low power, and compact volume of the brain. The technology could enable a new generation of intelligent sensor networks that mimic the brain's abilities for perception, action, and cognition.

Dramatically different from traditional software, IBM's new programming model breaks the mold of sequential operation underlying today's von Neumann architectures and computers. It is instead tailored for a new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures.

"Architectures and programs are closely intertwined and a new architecture necessitates a new programming paradigm," said Dr. Dharmendra S. Modha, Principal Investigator and Senior Manager, IBM Research. "We are working to create a FORTRAN for synaptic computing chips. While complementing today's computers, this will bring forth a fundamentally new technological capability in terms of programming and applying emerging learning systems."

To advance and enable this new ecosystem, IBM researchers developed the following breakthroughs that support all aspects of the programming cycle from design through development, debugging, and deployment:

- Simulator: A multi-threaded, massively parallel and highly scalable functional software simulator of an architecture comprising a network of neurosynaptic cores.

- Neuron Model: A simple, digital, highly parameterized spiking neuron model that forms a fundamental information processing unit of brain-like computation and supports a wide range of deterministic and stochastic neural computations, codes, and behaviors. A network of such neurons can sense, remember, and act upon a variety of spatio-temporal, multi-modal environmental stimuli.

- Programming Model: A high-level description of a "program" that is based on composable, reusable building blocks called "corelets." Each corelet represents a complete blueprint of a network of neurosynaptic cores that specifies a base-level function. Inner workings of a corelet are hidden so that only its external inputs and outputs are exposed to other programmers, who can concentrate on what the corelet does rather than how it does it. Corelets can be combined to produce new corelets that are larger, more complex, or have added functionality (a minimal sketch of this composition idea follows this list).

- Library: A cognitive system store containing designs and implementations of consistent, parameterized, large-scale algorithms and applications that link massively parallel, multi-modal, spatio-temporal sensors and actuators together in real-time. In less than a year, the IBM researchers have designed and stored over 150 corelets in the program library.

- Laboratory: A novel teaching curriculum that spans the architecture, neuron specification, chip simulator, programming language, application library and prototype design models. It also includes an end-to-end software environment that can be used to create corelets, access the library, experiment with a variety of programs on the simulator, connect the simulator inputs/outputs to sensors/actuators, build systems, and visualize/debug the results.
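
To make the corelet idea concrete, here is a minimal sketch of a composable building block that hides its internal network and exposes only input and output connectors. The class names, connector model, and compose() helper are invented for illustration; they are not IBM's actual Corelet language or API.

```python
# Illustrative sketch only: the names and API below are invented for
# explanation; they are NOT IBM's actual Corelet language.

from dataclasses import dataclass
from typing import List


@dataclass
class Connector:
    """A named bundle of spike lines; the only public surface of a corelet."""
    name: str
    width: int  # number of axon/neuron lines in the bundle


class Corelet:
    """A reusable blueprint for a network of neurosynaptic cores.

    Internal cores and wiring stay hidden; other corelets see only the
    input/output connectors, mirroring the encapsulation described above.
    """

    def __init__(self, name: str, inputs: List[Connector], outputs: List[Connector]):
        self.name = name
        self.inputs = {c.name: c for c in inputs}
        self.outputs = {c.name: c for c in outputs}
        self._cores = []    # hidden: neurosynaptic core configurations
        self._wiring = []   # hidden: internal axon/neuron routing

    def connect(self, out_name: str, other: "Corelet", in_name: str) -> None:
        """Wire one corelet's output connector to another corelet's input connector."""
        src, dst = self.outputs[out_name], other.inputs[in_name]
        if src.width != dst.width:
            raise ValueError("connector widths must match")
        self._wiring.append((src, other, dst))


def compose(name: str, parts: List[Corelet],
            exposed_in: List[Connector], exposed_out: List[Connector]) -> Corelet:
    """Build a larger corelet from smaller ones; only the chosen connectors stay public."""
    combined = Corelet(name, exposed_in, exposed_out)
    combined._cores = list(parts)  # sub-corelets become hidden internals
    return combined


# Usage: chain a hypothetical edge-detector corelet into a motion-classifier
# corelet, then wrap both as a single, larger corelet.
edges = Corelet("edge_detect", [Connector("pixels", 256)], [Connector("edges", 64)])
motion = Corelet("motion_class", [Connector("edges", 64)], [Connector("label", 8)])
edges.connect("edges", motion, "edges")
vision = compose("vision_frontend", [edges, motion],
                 [Connector("pixels", 256)], [Connector("label", 8)])
```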

These innovations are being presented at The International Joint Conference on Neural Networks in Dallas, TX.

In an effort to help usher in a new era of cognitive computing, a team at IBM Research-Almaden has designed a cognitive chip called TrueNorth. It's based on a non-von Neumann computing architecture that's inspired by the function, low power consumption and compactness of the human brain. To help people understand how the technology could be used, they brainstormed a selection of sample applications. Think of them as cognitive apps. In this video, Bill Risk, one of the managers of the SyNAPSE project, explains some of the apps.

Paving the Path to SyNAPSE

Modern computing systems were designed decades ago for sequential processing according to a pre-defined program. Although they are fast and precise "number crunchers," computers of traditional design become constrained by power and size, and operate at reduced effectiveness, when applied to real-time processing of the noisy, analog, voluminous big data produced by the world around us. In contrast, the brain, which operates comparatively slowly and at low precision, excels at tasks such as recognizing, interpreting, and acting upon patterns, while consuming the same amount of power as a 20-watt light bulb and occupying the volume of a two-liter bottle.

In August 2011, IBM successfully demonstrated a building block of a novel brain-inspired chip architecture based on a scalable, interconnected, configurable network of "neurosynaptic cores." Each core brings memory ("synapses"), processors ("neurons"), and communication ("axons") into close proximity, executing activity in an event-driven fashion. These chips serve as a platform for emulating and extending the brain's ability to respond to biological sensors and to analyze vast amounts of data from many sources at once.
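
The sketch below illustrates the event-driven flavor of such a core: spike events arrive on axons, pass through a sparse binary synapse crossbar (the core's "memory"), and drive simple integrate-and-fire neurons. The sizes, weights, and neuron rule are assumptions made for illustration, not the actual chip specification.

```python
# Minimal sketch of one event-driven neurosynaptic core. All parameters here
# (crossbar size, weights, threshold, leak) are illustrative assumptions.

import numpy as np

N_AXONS, N_NEURONS = 256, 256                         # assumed crossbar dimensions
rng = np.random.default_rng(0)

synapses = rng.random((N_AXONS, N_NEURONS)) < 0.05    # sparse binary crossbar ("synapses")
weights = rng.integers(1, 4, N_NEURONS)               # assumed per-neuron input weight
potential = np.zeros(N_NEURONS)                       # membrane potentials ("neurons")
THRESHOLD, LEAK = 20.0, 1.0

def tick(axon_spikes: np.ndarray) -> np.ndarray:
    """Advance one time step: integrate incoming spikes, leak, fire, reset.

    axon_spikes: boolean vector of length N_AXONS (events arriving this tick).
    Returns a boolean vector marking neurons that fired (events routed onward).
    """
    global potential
    potential += synapses[axon_spikes].sum(axis=0) * weights  # only active axons contribute
    potential = np.maximum(potential - LEAK, 0.0)             # constant leak toward rest
    fired = potential >= THRESHOLD
    potential[fired] = 0.0                                    # reset neurons that spiked
    return fired

# Usage: feed ten ticks of random spike events and count output spikes per tick.
for _ in range(10):
    out = tick(rng.random(N_AXONS) < 0.1)
    print(int(out.sum()), "neurons fired")
```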

Having completed Phase 0, Phase 1, and Phase 2, IBM and its collaborators (Cornell University and iniLabs, Ltd) have recently been awarded approximately $12 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 3 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, thus bringing the cumulative funding to approximately $53 million.

Smarter Sensors

IBM's long-term goal is to build a chip system with ten billion neurons and a hundred trillion synapses, while consuming merely one kilowatt of power and occupying less than two liters of volume.

Systems built from these chips could bring the real-time capture and analysis of various types of data closer to the point of collection. They would gather not only symbolic data, which is fixed text or digital information, but also sub-symbolic data, which is sensory-based and whose values change continuously. This raw data reflects activity of every kind in the world around us, from commerce, social interaction, and logistics to location, movement, and environmental conditions.

Take the human eyes, for example. They sift through over a terabyte of data per day. Emulating the visual cortex, low-power, lightweight eyeglasses designed to help the visually impaired could be outfitted with multiple video and auditory sensors that capture and analyze this optical flow of data.

These sensors would gather and interpret large volumes of data to signal how many individuals are ahead of the user, the distance to an upcoming curb, the number of vehicles in a given intersection, or the height of a ceiling and length of a crosswalk. Like a guide dog, the glasses would use the sub-symbolic data they perceive to plot the safest pathway through a room or outdoor setting and help the user navigate the environment via embedded speakers or ear buds. This same technology, at increasing levels of scale, can form sensory-based data input capabilities for machines, robots, smartphones, and automobiles.

"The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Approved for Public Release, Distribution Unlimited."


More information: www.research.ibm.com/cognitive… synaptic-chips.shtml

User comments: 7

alfie_null
not rated yet Aug 08, 2013
I wonder how we'll assess the reliability of devices built on this technology? Proving the correctness of existing software is already difficult.
antialias_physorg
not rated yet Aug 08, 2013
In neural nets (those that incorporate stochastic elements) you can never prove correctness. It's a bit like with quantum computers, where you also cannot prove correctness (for a different reason, though both ultimately have to do with the indeterministic nature of parts of the computation).

You CAN guarantee correctness to within a certain degree. And for many problems that is good enough. Especially given the vast speedup such systems may be able to offer the tradeoff is often well worth it.

Proving correctness for current software is already mostly impossible (except for very specific cases where the design specifies provability, which means you aren't allowed to use a lot of language features like break, continue, goto, threads, recursion, etc.). Such demands make software extraordinarily expensive, so this is reserved for the likes of software in planes, NASA mission-critical software, etc.
DonGateley
1 / 5 (1) Aug 08, 2013
And if it loses power? Better be made of ferroelectrics or something equivalent.
antialias_physorg
not rated yet Aug 09, 2013
And if it loses power?

Same problem as with any other computer: the current set of data/computation in memory will be lost.
But there's no reason why you can't take a snapshot of the state and store it (which will be done in any case because you don't want to have to reteach the system from scratch every time you power it up.)
This isn't THAT much different from regular computers.

Note that this is a SOFTWARE framework - and it will run on any kind of hardware (even though it's optimized for highly interconnected supercomputers, this will also run - at much reduced speed of course - on your desktop PC)
DonGateley
1 / 5 (1) Aug 09, 2013
It's about the size of its state. This article implies a huge amount of data to dump quickly, data that is grown within the devices through experience. Not many artificial systems are this dependent on the sum total of their history. Continuous logging would bog it down terribly if it is generating state data and maintaining it locally on a massively parallel basis. And it must be continuous if it is to be a real-time actor, which it most certainly has to be. I'm not sure why you would see this as business as usual. Oh, I just noticed who I'm responding to.

Did you read the first paragraph? It's a framework to develop software for a particular hardware base. That it can limp along on a standard processor is only to allow development and testing in a reduced environment.
antialias_physorg
not rated yet Aug 10, 2013
Continuous logging

You don't do continuous logging of states (you don't do that in most application software, either). You take the occasional snapshot. It allows you to recreate a state approximately around the time of the failure - which is perfectly adequate for neural computing.
All you have to log is at what stage you were in feeding in the data at that point and you're good to go again.

This amounts to a memory dump and a pointer, taken every now and then (you can overwrite the old memory dump each time you make a new one, as the old one is not useful; there is no sensible 'undo/rollback' feature in neural/stochastic networks). The memory needs for such a dump are basically the sum total of RAM you're using to run the program (more than for a conventional program, which usually needs just some state variables recorded, but not unfeasibly much; a few TB of storage is cheap/fast).
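
As a rough illustration of the checkpointing described in the comment above (a single overwritable memory dump plus a pointer into the input stream), here is a minimal sketch; the file name and state layout are arbitrary choices, not anything from the article or IBM's tooling.

```python
# Rough sketch of the snapshot idea: periodically dump the network state plus
# a pointer into the input stream, overwriting the previous dump each time.
# File name and state layout are arbitrary choices for this illustration.

import pickle

CHECKPOINT_PATH = "net_state.pkl"   # hypothetical location for the single snapshot

def save_snapshot(potentials, synapses, input_position):
    """Overwrite the one checkpoint: network state plus where we were in the input feed."""
    with open(CHECKPOINT_PATH, "wb") as f:
        pickle.dump({"potentials": potentials,
                     "synapses": synapses,
                     "input_position": input_position}, f)

def load_snapshot():
    """Restore the latest state and resume feeding input from the saved pointer."""
    with open(CHECKPOINT_PATH, "rb") as f:
        return pickle.load(f)

# Usage: call save_snapshot(...) every few thousand ticks; after a power loss,
# load_snapshot() recovers the learned state and the position in the input stream.
```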
antialias_physorg
not rated yet Aug 10, 2013
Did you read the first paragraph? It's a framework to develop software for a particular hardware base. That it can limp along on a standard processor is only to allow development and testing in a reduced environment.


That's why I said (and I quote myself:)
Note that this is a SOFTWARE framework - and it will run on any kind of hardware (even though it's optimized for highly interconnected supercomputers, this will also run - at much reduced speed of course - on your desktop PC)