Researchers seeking to make computer brains smarter by making them more like our own

May 11, 2015 by Sonia Fernandez, University of California - Santa Barbara
Artist's concept of a neural network. Credit: Illustration by Peter Allen

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit. For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

"It's a small, but important step," said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the 's, which has 1015 (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues, Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That's because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform.

What are these functions? Well, you're performing some of them right now. As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it's likely you would still be able to read this and derive the same meaning.

In the researchers' demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters ("z", "v" and "n") by their images, each letter stylized in different ways or saturated with "noise". In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the images.
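For a concrete sense of the task, here is a minimal software sketch of that kind of single-layer classifier, written as ordinary Python rather than memristor hardware. The 3x3-pixel templates, noise level, network size and delta-rule training loop below are illustrative assumptions for this sketch, not details taken from the paper.

# Minimal software sketch (not the authors' hardware): a single-layer
# perceptron classifying small binary images of "z", "v" and "n".
# The 3x3 pixel size, learning rate and noise level are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Idealized 3x3 templates for the three letters, flattened to 9 pixels.
LETTERS = {
    "z": [1, 1, 1,
          0, 1, 0,
          1, 1, 1],
    "v": [1, 0, 1,
          1, 0, 1,
          0, 1, 0],
    "n": [1, 1, 1,
          1, 0, 1,
          1, 0, 1],
}
X = np.array(list(LETTERS.values()), dtype=float)   # shape (3, 9)
Y = np.eye(3)                                        # one-hot targets

# Weights play the role of synaptic strengths: 9 pixel inputs + 1 bias, 3 output neurons.
W = rng.normal(scale=0.1, size=(10, 3))

def forward(x):
    x_b = np.append(x, 1.0)          # append the bias input
    return np.tanh(x_b @ W), x_b     # soft activations of the 3 output "neurons"

# Delta-rule training on noisy copies of the templates.
for _ in range(500):
    noisy = X + rng.normal(scale=0.2, size=X.shape)  # letters "saturated with noise"
    for x, y in zip(noisy, Y):
        out, x_b = forward(x)
        W += 0.05 * np.outer(x_b, y - out)           # nudge weights toward the target

# Classification: the output neuron with the largest response wins.
for name, template in LETTERS.items():
    test = np.array(template, dtype=float) + rng.normal(scale=0.2, size=9)
    out, _ = forward(test)
    print(name, "->", list(LETTERS)[int(np.argmax(out))])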

"While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality," said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

"And, as more solutions to the technological challenges are proposed the technology will be able to make it to the market sooner," she said.

Artificial synaptic circuit of the type used in the demonstration. Credit: Sonia Fernandez

Key to this technology is the memristor (a combination of "memory" and "resistor"), an electronic component whose resistance changes depending on the direction of the flow of electrical charge. Unlike conventional transistors, which rely on the drift and diffusion of electrons and holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate neural electrical signals.

"The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor," said Strukov. The ionic memory mechanism brings several advantages over purely electron-based memories, which makes it very attractive for artificial neural network implementation, he added.

"For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality," he said. "Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties."

This is where analog memory trumps digital memory: In order to create the same human brain-type functionality with conventional technology, the resulting device would have to be enormous—loaded with multitudes of transistors that would require far more energy.
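One way to see that advantage: in a memristor crossbar the stored conductances and the applied input voltages perform an entire vector-matrix multiplication in a single analog step, with Ohm's law doing every multiplication and Kirchhoff's current law doing every addition at the column wires. The short comparison below (array size and values are arbitrary assumptions for this sketch) contrasts that with the explicit multiply-accumulate loop a digital processor would have to run.

# Why an analog crossbar is compact: the array itself computes the product.
import numpy as np

rng = np.random.default_rng(1)

n_rows, n_cols = 10, 3                            # e.g. 9 pixels + bias in, 3 classes out
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))     # conductances = stored weights (siemens)
v = rng.uniform(0.0, 0.5, n_rows)                 # input pattern encoded as row voltages (volts)

# Physics of the array: the current into column j is sum_i v[i] * G[i, j],
# so the whole product appears at once as the vector of column currents.
column_currents = v @ G

# A digital processor computes the same thing as n_rows * n_cols explicit
# multiply-accumulate steps, shuttling data between memory and the ALU.
reference = np.array([sum(v[i] * G[i, j] for i in range(n_rows))
                      for j in range(n_cols)])
assert np.allclose(column_currents, reference)
print(column_currents)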

"Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture," said lead researcher Prezioso. "This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation."

To be able to approach the functionality of the human brain, however, many more memristors would be required to build more complex neural networks to do the same kinds of things we can do with barely any effort and energy, such as identify different versions of the same thing or infer the presence or identity of an object not based on the object itself but on other things in a scene.

Potential applications already exist for this emerging technology, such as medical imaging, the improvement of navigation systems or even for searches based on images rather than on text. The energy-efficient compact circuitry the researchers are striving to create would also go a long way toward creating the kind of high-performance computers and memory storage devices users will continue to seek long after the proliferation of digital transistors predicted by Moore's Law becomes too unwieldy for conventional electronics.

"The exciting thing is that, unlike more exotic solutions, it is not difficult to imagine this technology integrated into common processing units and giving a serious boost to future computers," said Prezioso.

In the meantime, the researchers will continue to improve the performance of the memristors, scaling up the complexity of the circuits and enriching the functionality of the artificial neural network. The very next step would be to integrate a memristor neural network with conventional semiconductor technology, which will enable more complex demonstrations and allow this early artificial neural network to do more complicated and nuanced things. Ideally, according to materials scientist Hoskins, this brain would consist of trillions of these types of devices vertically integrated on top of each other.

"There are so many potential applications—it definitely gives us a whole new way of thinking," he said.

The researchers' findings are published in the journal Nature.


More information: Training and operation of an integrated neuromorphic network based on metal-oxide memristors, Nature 521, 61–64 (07 May 2015) DOI: 10.1038/nature14441



Comments

gkam, May 11, 2015
Big step forward, but even bigger ones are needed. For one, our neuronal connections do not just go from one to another, but branch out radially, making seemingly random connections, but those which result in memories or actions.

More important, they change connections.

And, we depend more on the brew of chemicals from our ductless glands, which really determine how we feel about things and tend to react to them - the "mind organs". Look into the role of those neurotransmitters in the complex we call the brain.
gkam, May 11, 2015

How would we provide the spectrum of biasing agents for AI which drive us as emotions?

Clearly, we have outgrown digital processing, and can now do it with analog or quantum processing.
krundoloss, May 11, 2015
This is awesome! I've read countless articles about memristor technology, and it was always in early stages and experimental. But now someone has built a functional neural network with the ability to process and store information simultaneously. The classical computer has served us well, but this new technology will be more compatible with our way of thinking. Can a neural network understand? Perhaps in a similar way that we can? I have to wonder if this can create true AI; it would seem logical that with learning algorithms combined with advanced object recognition, at the very least we can make excellent robots. It would probably be a good idea to make them weak and slow, lol!
antialias_physorg, May 12, 2015
> I think it is a far smarter idea to emulate the algorithms the brain uses, rather than the wetware

Hardware has the following advantages:
1) it is a lot faster than software
2) it can be massively parallel (which is part of the reason for 1, but it has additional advantages)

> This all needs to be sorted out by real engineers.

Who do you think is working on this stuff? Amateurs?

> I think computer science people haven't been exposed to broader scientific and technical methods

I invite you to take part in (any) first semester CS lecture. You will fail. Badly. CS incorporates a lot of math and physics (and in the specialization courses there are branches that go into medical computing, where you are exposed to biology).
krundoloss, May 12, 2015
Current CPUs act as "number filters", doing calculations based on the code that is sent to them and on the architecture of the CPU itself, with complex circuit paths designed to calculate based on that code. There is very little storage of information in a CPU; it has to communicate with main memory and the CPU cache to hold information for processing.

This neural network will have the ability to calculate but also to simultaneously store the information, and to have many different states within the artificial neurons. This will allow higher performance and let it function more like our brain. It is exciting from an AI perspective, because naturally, if we can create a closer duplication of our own neural tissue, then we are closer to creating something that can "think" like we do. Great stuff!
Moebius, May 12, 2015
Cool, let's see it produce truly random numbers. Bet it can't because I don't think neuron means what we think it means.
Tessellatedtessellations, May 12, 2015
> Big step forward, but even bigger ones are needed. For one, our neuronal connections do not just go from one to another, but branch out radially, making seemingly random connections, but those which result in memories or actions.
>
> More important, they change connections.

I keep wondering if tricks learned in creating FPGAs could help solve that problem. FPGAs are basically hardware that is constructed by software: software determines the connections between fundamental circuits to build a CPU or whatever.
