Steering material scientists to better memory devices

Graphical representation of a crossbar array, where different memory devices serve in different roles. Credit: IBM

Ideally, next-generation AI technologies should understand all our requests and commands, extracting them from a huge background of irrelevant information, in order to rapidly provide relevant answers and solutions to our everyday needs. Making these "smart" AI technologies pervasive—in our smartphones, our homes, and our cars—will require energy-efficient AI hardware, which we at IBM Research plan to build around novel and highly capable analog memory devices.

In a recent paper published in the Journal of Applied Physics, our IBM Research AI team established a detailed set of guidelines that emerging nanoscale analog memory devices will need to satisfy to enable such energy-efficient AI hardware accelerators.

We had previously shown, in a Nature paper published in June 2018, that training a neural network using highly parallel computation within dense arrays of memory devices such as phase-change memory is faster and consumes less power than using a graphics processing unit (GPU).
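Conceptually, the speedup comes from physics: in a crossbar array, each weight is stored as a pair of conductances, inputs arrive as voltages on the rows, and the currents summed on each column perform an entire matrix-vector multiplication in a single analog step. The minimal NumPy sketch below mimics that operation digitally; the array sizes and variable names are illustrative, not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each weight is encoded as the difference of two device conductances,
# so signed weights can be represented with positive-only devices.
n_inputs, n_outputs = 4, 3
g_plus = rng.uniform(0.0, 1.0, size=(n_inputs, n_outputs))
g_minus = rng.uniform(0.0, 1.0, size=(n_inputs, n_outputs))

x = rng.uniform(-1.0, 1.0, size=n_inputs)  # input voltages on the rows

# Column currents: i_j = sum_i x_i * (g_plus[i, j] - g_minus[i, j]).
# In hardware, Kirchhoff's current law performs this sum for all columns
# at once, the multiply-accumulate work a GPU would do step by step.
currents = x @ (g_plus - g_minus)
print(currents)
```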

The advantage of our approach comes from implementing each weight with multiple devices, each serving in a different role. Some devices are mainly tasked with memorizing long-term information. Other devices are updated very rapidly, changing as training images (such as pictures of trees, cats, ships, etc.) are shown, and then occasionally transferring their learning to the long-term information devices. Although we introduced this concept in our Nature paper using existing devices (phase-change memory and conventional capacitors), we saw an opportunity for new memory devices to perform even better, if we could just identify the requirements for these devices.
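To make this division of labor concrete, here is a minimal Python sketch of a two-device weight. It assumes a simple scheme in which a fast device absorbs every per-example update and a more significant long-term device receives occasional transfers; the class name, significance factor, and transfer schedule are illustrative assumptions, not IBM's actual circuit design.

```python
class TwoTierWeight:
    """Illustrative weight built from two devices with different roles."""

    def __init__(self, significance=10.0):
        self.g_slow = 0.0                 # long-term device: holds most of the weight
        self.g_fast = 0.0                 # fast device: absorbs frequent small updates
        self.significance = significance  # the slow device counts for more

    @property
    def value(self):
        # The effective weight combines both devices, weighted by significance.
        return self.significance * self.g_slow + self.g_fast

    def update(self, delta):
        # Every training example touches only the fast device.
        self.g_fast += delta

    def transfer(self):
        # Occasionally, the fast device's accumulated learning is moved
        # into the long-term device, and the fast device is reset.
        self.g_slow += self.g_fast / self.significance
        self.g_fast = 0.0

w = TwoTierWeight()
for step in range(1, 101):
    w.update(0.01)        # rapid per-example updates
    if step % 20 == 0:
        w.transfer()      # occasional transfer to long-term storage
print(round(w.value, 3))  # 1.0: all updates preserved in the slow device
```

Splitting the weight this way is what relaxes the device requirements: the fast device must update quickly but only hold its state briefly, while the long-term device must retain information well but is updated only occasionally.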

In our follow-up paper, just published in the Journal of Applied Physics, we were able to quantify the device properties that these "long-term information" and "fast-update" devices would need to exhibit. Because our scheme divides tasks across the two categories of devices, these requirements are much less stringent, and thus much more achievable, than before. Our work provides a clear path for material scientists to develop novel devices for energy-efficient AI hardware accelerators based on analog memory.

More information: Giorgio Cristiano et al. Perspective on training fully connected networks with resistive memories: Device requirements for multiple conductances of varying significance, Journal of Applied Physics (2018). DOI: 10.1063/1.5042462

Stefano Ambrogio et al. Equivalent-accuracy accelerated neural-network training using analogue memory, Nature (2018). DOI: 10.1038/s41586-018-0180-5


Provided by IBM

This story is republished courtesy of IBM Research.

