Scientists build a neural network using plastic memristors

January 29, 2016
Memristor. Credit: courtesy of the authors of the study

A collaboration of Russian and Italian scientists has created a neural network based on polymeric memristors, devices that could potentially be used to build fundamentally new computers. According to the researchers, these developments have applications in machine vision, hearing, and other artificial sensory systems, as well as in intelligent control systems for various devices, including autonomous robots.

The authors of the new study focused on polymer-based memristors, a promising area in the field of memristive neural networks, and discovered that creating even the simplest perceptron is not that easy. In fact, it is so difficult that until the publication of their paper in the journal Organic Electronics, there had been no reports of any successful experiments using organic materials. The experiments reported demonstrate that it is possible to create very simple polyaniline-based neural networks, and that these networks are able to learn and perform specified logical operations.

A memristor is an electrical element similar to a conventional resistor. The difference is that the resistance of a memristor depends on the charge that has passed through it, so the device constantly changes its properties under the influence of an external signal: a memristor has a memory, and at the same time it is able to change the data encoded by its resistance state. In this sense, a memristor is similar to a synapse, the connection between two neurons in the brain, which has a high level of plasticity and can modify the efficiency of signal transmission between neurons under the influence of the transmission itself. A memristor thus allows scientists to build a genuine electronic analogue of a synapse, and the physical properties of memristors mean that, at a minimum, they can be made as small as the elements of conventional chips.
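To make this concrete, here is a minimal numerical sketch, assuming an idealized charge-controlled model rather than the actual polyaniline device: resistance is taken to be a function of the net charge that has flowed through the element.

```python
# Toy charge-controlled memristor (an idealized textbook model, not the
# polyaniline device from the study): resistance depends on the total
# charge q that has passed through the element, so the device "remembers"
# its signal history.

R_ON, R_OFF = 1e3, 1e6    # limiting resistances in ohms (assumed values)
Q_MAX = 1e-6              # charge needed to switch fully, in coulombs (assumed)

def resistance(q):
    """Resistance interpolated linearly between R_OFF and R_ON as charge accumulates."""
    w = min(max(q / Q_MAX, 0.0), 1.0)   # internal state variable in [0, 1]
    return R_OFF + (R_ON - R_OFF) * w

q, dt = 0.0, 1e-3
for _ in range(500):                    # hold a constant 5 V across the device
    i = 5.0 / resistance(q)             # Ohm's law at the current state
    q += i * dt                         # accumulated charge lowers the resistance

print(f"resistance after the pulse: {resistance(q):.0f} ohms")
```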

Some estimates indicate that the size of a memristor could be reduced to as little as ten nanometers, and the technologies used to manufacture the experimental prototypes could, in theory, be scaled up to the level of mass production. However, this remains theoretical, and it does not mean that chips with a fundamentally new structure based on neural networks will be available on the market any time soon.

The plastic polyaniline was not chosen by chance. Previous studies had demonstrated that it can be used to create individual memristors, so the scientists did not have to test many different materials. Using a polyaniline solution, a glass substrate, and chromium electrodes, they created a prototype with dimensions that, at present, are much larger than those typical of conventional microelectronics: the strip of the structure was approximately one millimeter wide (the researchers decided to avoid miniaturization for the moment). All of the memristors were tested for their electrical characteristics, and it was found that the current-voltage characteristic of the devices is indeed non-linear, in line with expectations. The memristors were then connected into a single neuromorphic network.

A current-voltage characteristic (or IV curve) is a graph in which the horizontal axis represents voltage and the vertical axis current. For a conventional resistor, the IV curve is a straight line: in strict accordance with Ohm's law, current is proportional to voltage. For a memristor, however, it is not just the voltage that matters, but the history of the voltage: if you gradually increase the voltage supplied to the memristor, the current passing through it grows non-linearly, and at a certain point there is a sharp bend in the graph as its resistance falls sharply.

If you then begin to reduce the voltage, the memristor remains in its conducting state for some time, after which its properties change sharply again and its conductivity decreases. In the experimental samples, as the voltage was increased through 0.5 V, hardly any current passed through (a few tenths of a microamp), but when the voltage was brought back down through the same value, the ammeter registered 5 microamps. Microamps are very small units, but in this case it is the contrast that matters most: 0.1 μA versus 5 μA, a difference of 50 times. This is more than enough to make a clear distinction between the two signals.
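This hysteresis can be sketched with a simple threshold model; the SET/RESET voltages below are assumed for illustration, and only the 0.1 μA and 5 μA figures at 0.5 V come from the experiment described above.

```python
# Toy threshold model of the hysteresis described above. The switching
# voltages are hypothetical; the resistances are chosen so that 0.5 V
# gives ~0.1 uA on the rising sweep and ~5 uA on the falling one.

V_SET, V_RESET = 0.8, 0.2      # hypothetical switching thresholds (volts)
R_HIGH, R_LOW = 5e6, 1e5       # poorly / well conducting states (ohms)
state = "high"                 # start in the poorly conducting state

def current(v):
    """Current through the device, with state switching at the thresholds."""
    global state
    if v >= V_SET:
        state = "low"          # resistance falls sharply
    elif v <= V_RESET:
        state = "high"         # conductivity drops again on the way down
    return v / (R_LOW if state == "low" else R_HIGH)

print(f"0.5 V, rising sweep:  {current(0.5) * 1e6:.2f} uA")  # ~0.10 uA
current(1.0)                   # sweep past V_SET: the device switches on
print(f"0.5 V, falling sweep: {current(0.5) * 1e6:.2f} uA")  # ~5.00 uA, 50x more
```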

After checking the basic properties of individual memristors, the physicists conducted experiments to train the neural network. The training (a generally accepted term, and therefore written without inverted commas) involves applying electric pulses at random to the inputs of the perceptron. If a certain combination of electric pulses is applied to the inputs (e.g., a logic one and a logic zero at two inputs) and the perceptron gives the wrong answer, a special correcting pulse is applied; after a certain number of repetitions, the internal parameters of the device (namely, the memristive resistances) reconfigure themselves. In other words, they are "trained" to give the correct answer.
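In software terms, this procedure is essentially the classic perceptron learning rule: the weights play the role of the memristive resistances, and the error-driven weight update plays the role of the correcting pulse. A minimal sketch (the standard software analogue, not the authors' exact pulse protocol):

```python
# Software analogue of the training described above: weights stand in for
# the memristive resistances, and the error-correcting update stands in
# for the correcting pulse.
import random

class Perceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.w = [0.0] * n_inputs   # the "resistances" to be reconfigured
        self.b = 0.0                # threshold offset
        self.lr = lr                # strength of each correcting pulse

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def train(self, samples, epochs=50):
        data = list(samples)
        for _ in range(epochs):
            random.shuffle(data)            # pulses applied at random
            for x, target in data:
                error = target - self.predict(x)
                if error:                   # wrong answer: apply a correction
                    self.w = [wi + self.lr * error * xi
                              for wi, xi in zip(self.w, x)]
                    self.b += self.lr * error
```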

The scientists demonstrated that after about a dozen attempts, their memristive network is capable of performing the logical NAND operation, and can then learn to perform NOR. Since an operator or a conventional computer checks whether the answer is correct, this method is called supervised learning.
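Using the sketch above, the reported sequence can be reproduced in software: train the network on NAND, then retrain the very same weights to NOR (both functions are linearly separable, so a single-layer perceptron can learn each).

```python
# Reproducing the reported sequence with the sketch above: learn NAND,
# then retrain the very same network to NOR.
p = Perceptron(n_inputs=2)

nand = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
p.train(nand)
print([p.predict(x) for x, _ in nand])   # [1, 1, 1, 0]

nor = [((0, 0), 1), ((0, 1), 0), ((1, 0), 0), ((1, 1), 0)]
p.train(nor)                             # same weights, reconfigured
print([p.predict(x) for x, _ in nor])    # [1, 0, 0, 0]
```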

Needless to say, an elementary perceptron of macroscopic dimensions, with a characteristic reaction time of tenths or hundredths of a second, is not ready for commercial production. However, as the researchers themselves note, their creation was made from inexpensive materials, and the reaction time will decrease as the size decreases. The first prototype was intentionally enlarged to make the work easier; it is physically possible to manufacture much more compact chips. In addition, polyaniline lends itself to attempts at three-dimensional structures, with memristors stacked on top of one another in multiple tiers (e.g., as random intersections of thin polymer fibers), whereas modern silicon microelectronic systems, due to a number of technological limitations, are two-dimensional. The transition to the third dimension could potentially offer many new opportunities.

What is meant by "fundamentally different computers"?

The common classification of computers is based either on their form factor (desktop/laptop/tablet) or on the operating system used (Windows/MacOS/Linux). However, this is only a superficial classification from a user's perspective; specialists normally take an entirely different approach, based on the principle by which computer operations are organized. The computers users are accustomed to, from tablets and desktop machines to the on-board computers of spacecraft, are all devices with a von Neumann architecture: devices built around independent processors, random access memory (RAM), and read-only memory (ROM).

The memory stores the code of the program to be executed. A program is a set of instructions for performing certain operations on data. The data are also stored in and retrieved from memory in accordance with the program, and the program's instructions are carried out by the processor. There may be several processors working in parallel, and data can be stored in a variety of ways, but there is always a fundamental division between processor and memory. Even if the computer is integrated into a single chip, it will still have separate elements for processing information and separate units for storing data. All modern microelectronic systems are based on this principle, which is partly why most people are not even aware that there could be computer systems without distinct processors and memory.
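To make the division concrete, here is a toy sketch of a von Neumann machine, with invented opcodes and memory layout: program and data share one memory, and a separate processor fetches and executes instructions from it one at a time.

```python
# Toy von Neumann machine: instructions and data live in the SAME memory,
# and a separate processor fetches and executes them one by one.
# Opcodes and memory layout are invented for illustration.
memory = [
    ("LOAD", 6),     # 0: acc <- memory[6]
    ("ADD", 7),      # 1: acc <- acc + memory[7]
    ("STORE", 8),    # 2: memory[8] <- acc
    ("HALT", 0),     # 3: stop
    0, 0,            # 4-5: unused
    2, 3,            # 6-7: data, sitting right next to the program
    0,               # 8: the result will be written here
]

acc, pc = 0, 0                   # processor state: accumulator + program counter
while True:
    op, arg = memory[pc]         # fetch from the shared memory
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "HALT":
        break

print(memory[8])                 # 5: data processed by the separate processor
```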

If physically different elements are used to store the data and the program, the computer is said to have a Harvard architecture. This approach is used in certain microcontrollers and in small specialized computing devices. The chip that controls a refrigerator, a lift, or a car engine (in all these cases, a "conventional" computer would be overkill) is a microcontroller. However, neither the Harvard nor the von Neumann architecture allows the processing and storage of information to be combined in a single element of a computer system.
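For contrast, the same toy machine can be restructured along Harvard lines, with the program and the data in physically separate memories, as in many microcontrollers:

```python
# The same toy machine as a Harvard architecture: the program and the data
# now live in physically separate memories, so program memory is never
# written during execution. (Opcodes invented for illustration.)
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
data = [2, 3, 0]                 # a separate store for data only

acc, pc = 0, 0
while True:
    op, arg = program[pc]        # instructions come from program memory
    pc += 1
    if op == "LOAD":
        acc = data[arg]          # operands come from data memory
    elif op == "ADD":
        acc += data[arg]
    elif op == "STORE":
        data[arg] = acc
    elif op == "HALT":
        break

print(data[2])                   # 5: program and data never mix
```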

However, such systems do exist. Furthermore, if you look at the brain itself as a computer system (though it is not yet known whether the function of the brain is reducible to computations), you will see that it is not built at all like a computer with a von Neumann architecture. Neural networks have no specialized processor or separate memory cells. Information is stored and processed in each and every neuron, and the human brain has approximately 100 billion of these elements. Moreover, almost all of them can work in parallel, which is why the brain processes information with such great efficiency and at such high speed. Artificial neural networks currently implemented on von Neumann computers only emulate these processes, and emulation, i.e., step-by-step imitation of functions, inevitably leads to a decrease in speed and an increase in energy consumption. In many cases this is not critical, but in certain cases it can be.

Devices that are built as neural networks at the physical level could be used for a variety of tasks. Most importantly, neural networks are capable of pattern recognition; they are used as a basis for recognising handwritten text, for example, or for signature verification. Whenever a certain pattern needs to be recognised and classified, such as a sound, an image, or characteristic changes on a graph, neural networks are actively used, and it is in these fields that an advantage in speed and energy consumption is critical. In a control system for an autonomous flying robot, every milliwatt-hour and every millisecond counts, just as a real-time system processing data from a collider detector cannot take too long to "think" about picking out particle tracks of potential interest to scientists from among a large number of other recorded events.


More information: Organic Electronics, dx.doi.org/10.1016/j.orgel.2015.06.015



Comments

Eikka, Jan 29, 2016
There's another kind of machine that isn't quite a computer but still performs computation.

Imagine that you have a slip of paper with numbered rows; on each row is written the number of the next row you should look at. Following this sequence of steps, you jump through the list back and forth from one line to another, which forms a pattern that may actually be a piece of music, or the mechanical sequence to open the doors of a lift.

That's called a state machine. Each row is a state, and external input can be brought in by adjusting which row you look at next based on additional criteria. Something like "if the door is open, add +1 to the number of the row you look up next" creates a branch in the sequence.

If you then allow the contents of the table to be changed by the contents of the table, complex computation is achieved simply by stepping through the list. There's no processor involved, just a bunch of memory, and the data processes itself.
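A minimal Python sketch of the jump-table machine described here, with the table contents and the door rule invented for illustration:

```python
# Sketch of the numbered-row machine described above: each row holds an
# output and the number of the next row to look at. External input ("door
# open") bumps the jump target, branching the sequence.
table = [
    ("C", 1),   # row 0: emit "C", then go to row 1
    ("E", 2),   # row 1
    ("G", 0),   # row 2: loops back to row 0... unless input bumps it
    ("C", 3),   # row 3: a final state that repeats
]

def run(door_open, steps=8):
    row, outputs = 0, []
    for _ in range(steps):
        note, nxt = table[row]
        outputs.append(note)
        row = (nxt + (1 if door_open else 0)) % len(table)  # "+1 if door open"
    return "".join(outputs)

print(run(door_open=False))   # CEGCEGCE: the steady looping pattern
print(run(door_open=True))    # CGECCGEC: the input creates a branch
```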
Eikka, Jan 29, 2016
A simple state machine is implemented in hardware by taking a ROM chip and looping some or all of the data output lines back onto the address select lines through a buffer. Then you burn an appropriate table of data into the ROM and give it a clock signal; it starts to cycle through the sequence of states, and you can tap into the output lines to extract the sequence for some purpose, like blinking Christmas lights.

A complete microprocessor can be implemented as a state machine, which was sometimes done in the past because it used to be faster than implementing the same functions in discrete logic.
Eikka, Jan 30, 2016
"it is the contrast that is most significant: 0.1 μA to 5 μA, a difference of 50 times. This is more than enough to make a clear distinction between the two signals"

"The human synapses integrate the signal with at least 40,000 calcium ions, i.e. with nearly 10,000 times higher resolution, which allows them to weigh and compare signals from a much larger number of neurons at the same moment."

The resolution is a matter of signal-to-noise ratio within the band of interest. If you have no noise, any arbitrary interval can be used to encode any arbitrary number of bits.

Conversely, the ability of human synapses to distinguish between individual calcium ions seems implausible. Maybe it takes a thousand of them to reliably encode a bit of information.
