Brain-inspired synaptic transistor learns while it computes

Nov 01, 2013
Several prototypes of the synaptic transistor are visible on this silicon chip. Credit: Eliza Grinnell, Harvard SEAS.

(Phys.org) — It doesn't take a Watson to realize that even the world's best supercomputers are staggeringly inefficient and energy-intensive machines.

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computation that puts Siri and Blue Gene to shame.

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.

"There's extraordinary interest in building energy-efficient electronics these days," says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. "Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume."

The human mind, for all its phenomenal computing power, runs on roughly 20 watts of power (less than a household light bulb), so it offers a natural model for engineers.

"The transistor we've demonstrated is really an analog to the synapse in our brains," says co-lead author Jian Shi, a postdoctoral fellow at SEAS. "Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons."

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum "axon" and "dendrite" terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the timing of the electrical signal.

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a voltage magnitude, which it applies to the ionic liquid, creating an electric field that either drives oxygen ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.
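The gating scheme described above lends itself to a toy model. The sketch below is a minimal illustration only: the class name, the linear mapping from spike delay to gate voltage, and every constant are assumptions for the sake of the example, not values or behavior taken from the paper.

```python
# Toy model of the gating scheme: a pre/post spike delay is converted to a
# gate-voltage magnitude, which shifts the oxygen-ion concentration in the
# nickelate channel and thereby its conductance. All constants and the
# linear mappings are illustrative assumptions, not figures from the paper.

class SynapticTransistor:
    def __init__(self, conductance=0.5):
        # Normalized channel conductance of the nickelate film (0..1).
        self.conductance = conductance

    def apply_spike_pair(self, delay_ms):
        # The external multiplexer maps the spike-timing delay to a gate
        # voltage on the ionic liquid (assumed: shorter delay -> larger
        # voltage, saturating at a 50 ms window).
        gate_voltage = max(0.0, 1.0 - delay_ms / 50.0)
        # The resulting field drives oxygen ions into the channel, raising
        # conductance; a reversed voltage would remove ions and lower it.
        self.conductance = min(1.0, self.conductance + 0.1 * gate_voltage)
        return self.conductance

syn = SynapticTransistor()
fast = syn.apply_spike_pair(delay_ms=5)   # near-coincident spikes: strong potentiation
slow = syn.apply_spike_pair(delay_ms=45)  # long delay: weak potentiation
```

As in the biological analogy from the article, closely timed spikes strengthen the connection far more than widely separated ones.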

The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.

"This system changes its conductance in an analog way, continuously, as the composition of the material changes," explains Shi. "It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just 'on' or 'off.'"
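The contrast with a binary device can be made concrete with a short, hypothetical sketch: a conventional memory cell thresholds every write to 0 or 1, while an analog channel accumulates graded conductance changes. The functions and numbers below are invented for illustration.

```python
# Illustrative contrast (hypothetical numbers): a binary cell quantizes each
# write to 0 or 1, while an analog synaptic channel retains a continuum of
# conductance states.

def binary_write(value):
    # A conventional cell snaps the input to one of two states.
    return 1 if value >= 0.5 else 0

def analog_update(conductance, delta):
    # The analog channel shifts continuously with ion concentration,
    # clamped to the physical range of the material.
    return min(1.0, max(0.0, conductance + delta))

weight = 0.20
for delta in (0.07, 0.07, 0.07):           # three small potentiation events
    weight = analog_update(weight, delta)  # the weight drifts smoothly upward

print(binary_write(0.41))  # -> 0 (sub-threshold information is lost)
print(round(weight, 2))    # -> 0.41 (the graded state is preserved)
```

A single analog device can thus store a synaptic weight that a binary cell would need many bits to approximate.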

The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.

Left to right are Sieu D. Ha and Jian Shi, postdoctoral fellows at Harvard SEAS, and Shriram Ramanathan, associate professor of materials science. Credit: Eliza Grinnell, Harvard SEAS.

Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.

"We exploit the extreme sensitivity of this material," says Ramanathan. "A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency."

The nickelate system is also well positioned for seamless integration into existing silicon-based systems.

"In this paper, we demonstrate high-temperature operation, but the beauty of this type of device is that the 'learning' behavior is more or less temperature insensitive, and that's a big advantage," says Ramanathan. "We can operate this anywhere from about room temperature up to at least 160 degrees Celsius."

For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.

"In our proof-of-concept device, the time constant is really set by our experimental geometry," says Ramanathan. "In other words, to really make a super-fast device, all you'd have to do is confine the liquid and position the gate electrode closer to it."

In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this "ultimate fluidic transistor."

He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.

"In the SEAS setting it's very exciting; we're able to collaborate easily with people from very diverse interests," Ramanathan says.

For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.

"You have to build new instrumentation to be able to synthesize these new materials, but once you're able to do that, you really have a completely new material system whose properties are virtually unexplored," Ramanathan says. "It's very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch."

"This kind of proof-of-concept demonstration carries that work into the 'applied' world," he adds, "where you can really translate these exotic electronic properties into compelling, state-of-the-art devices."


More information: www.nature.com/ncomms/2013/131… full/ncomms3676.html


User comments: 9


Eikka
1 / 5 (9) Nov 01, 2013
they continuously adapt to stimuli, strengthening some connections while weakening others.


And also make and break new connections all the time. Something the silicon chip can't hope to do because its construction is fixed.
Lurker2358
1.4 / 5 (10) Nov 01, 2013
For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.


An electronic neural net need not have as many "cells" or synapses as a human brain to be massively beneficial in some problem-solving and search applications.

You would have conventional processors for functions where those prove best, and the neural net processors for those functions where they prove best.

An example might be an adaptive A.I. for a Real Time Strategy game.

If an electronic neural net processor can one day beat a "Grand Master" level human being consistently in a Real Time Strategy game, does that make it sentient?

Humans have a moral responsibility to one another when making something that could become sentient. What right does one man have to make an intelligence which could become super-human without the consent of everyone else?
megmaltese
1.4 / 5 (11) Nov 01, 2013
Skynet is here...
Code_Warrior
2 / 5 (1) Nov 01, 2013
This all sounds great in theory, but I'm thinking about saving the state of the synapses to external storage. How exactly would that be done? Also, how would the synapse patterns stored in external storage be re-loaded into the neural net?

While each individual device could be trained to excel at some task that a computer doesn't do well, it seems like having the ability to reset and load different learned patterns into such a brain would be desirable. Kind of like having a neural co-processor that can be loaded with different applications. Keep the best of what current systems can do, and add a neural co-processor for those things they can't do well.

Also, I'm not sure that I like the idea of continuous learning. It would be nice to have a way to fix the current pattern and prevent further learning. This would avoid potentially damaging stimulations from negatively impacting the learned pattern.
grondilu
not rated yet Nov 01, 2013
I don't understand everything, but I have the feeling this is big. Kind of reminds me of memristors. Doesn't this new kind of transistor do pretty much the same thing? If so, does that open the door to very small, energy-efficient hardware implementation of artificial neurons?

Should we panic now and prepare for the robot apocalypse?
Lurker2358
1 / 5 (8) Nov 02, 2013
This all sounds great in theory, but I'm thinking about saving the state of the synapses to external storage. How exactly would that be done? Also, how would the synapse patterns stored in external storage be re-loaded into the neural net?


You make use of a concept I invented a few years ago, about just this topic for use in "hybrid" computers. I called it the "Spy". You have a circuit which can monitor and read the state of each potential synapse, and you store those states as data on a hard disk. You then need a "writer" which can write those states back to other synapses, for example as a backup, or for "imaging" a pattern onto multiple systems.

Of course, this involves a massive amount of parallelism and redundancy, but as far as I can tell there isn't much else you can do in a "secure" fashion.

It's not all that different from reading or writing to a file, except you have one set of circuitry designed to read or over-ride the other set.
Zephir_fan
Nov 02, 2013
This comment has been removed by a moderator.
beleg
1 / 5 (3) Nov 02, 2013
@E
Your comment reminded this reader of the following:
Scientists discover that DNA damage occurs as part of normal brain activity.

http://medicalxpr...ain.html

Here two excerpts:
"It is both novel and intriguing, this team's finding that the accumulation and repair of DSBs (a type of DNA damage known as a double-strand break) may be part of normal learning," said Fred H. Gage, PhD, of the Salk Institute.

Senior Investigator Lennart Mucke, MD, reports in Nature Neuroscience that DSBs in neuronal cells in the brain can also be part of normal brain functions such as learning—as long as the DSBs are tightly controlled and repaired in good time.

Chances are no one thought of this research when reading your comment until now.
This also suggests the researchers reported here found other research and reports for their inspiration on which to draw analogies.
antialias_physorg
5 / 5 (2) Nov 02, 2013
And also make and break new connections all the time. Something the silicon chip can't hope to do because its construction is fixed.

On silicon chips the logic is in the software, which makes or breaks the connections. It's a wholly different paradigm.

The human mind, for all its phenomenal computing power, runs on roughly 20 Watts of energy (less than a household light bulb), so it offers a natural model for engineers.

That could be read soooo wrong.

This all sounds great in theory, but I'm thinking about saving the state of the synapses to external storage. How exactly would that be done?

It's an interesting problem. I'd do it via a number of test pulses (like tomography). As for how to recreate a stored pattern in another chip: that one is tricky. But one could use the same approach: send targeted pulses down the line until you get a (similar) response. It wouldn't be a 1:1 copy but a 'fuzzy' copy.
rsklyar
1 / 5 (1) Nov 03, 2013
Beware that a gang of Harvard "researchers" has already stole in Nature journals and, with further support of the MIT's ones, in ASC Nano Lett both the ideas and money of taxpayers. There are numerous swindlers from David H. Koch Inst. for Integrative Cancer Research and Dept of Chemical Engineering, also with Dept of Chemistry and Chem. Biology and School of Eng and Applied Science of Harvard University at http://issuu.com/...vard_mit & http://issuu.com/...llsens12 .
Their plagiaristic "masterpieces" titled Macroporous nanowire nanoelect scaffolds for synthetic tissues (DOI: 10.1038/NMAT3404) and 'Outside Looking In: Nanotube Transistor Intracellular Sensors' (dx.doi.org/10.1021/nl301623p) were funded by NIH Director's Pioneer Award (1DP1OD003900) and a McKnight Foundation Technological Innovations in Neurosc Award, also a Biotechnology Research Endowment from the Dep. of Anesthes at Children's Hospital Boston and NIH grant GM073626, DE01-3023&6516.