Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply-accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware-software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with 'polarity inversion' to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today's graphics processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers.
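To make the analogue multiply-accumulate at the heart of this approach concrete, the following is a minimal numerical sketch (not the authors' implementation): each weight is stored as a differential conductance pair, the forward pass is the matrix-vector product that a crossbar performs in a single parallel step, and device-to-device variation is modelled as illustrative multiplicative noise. All dimensions and parameter values are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative crossbar dimensions and conductances (arbitrary units).
# Each weight is encoded as a differential pair, w = g_pos - g_neg, a
# common scheme for analogue non-volatile memory arrays.
n_in, n_out = 8, 4
g_pos = rng.uniform(0.0, 1.0, (n_in, n_out))
g_neg = rng.uniform(0.0, 1.0, (n_in, n_out))

def analogue_mac(x, g_pos, g_neg, variation=0.05):
    """Parallel multiply-accumulate as performed on a crossbar: input
    voltages drive the rows and currents sum along the columns
    (Ohm's and Kirchhoff's laws). Device-to-device variation is
    modelled here as multiplicative Gaussian noise on each conductance."""
    noisy_pos = g_pos * (1.0 + variation * rng.standard_normal(g_pos.shape))
    noisy_neg = g_neg * (1.0 + variation * rng.standard_normal(g_neg.shape))
    return x @ (noisy_pos - noisy_neg)  # a single parallel step in hardware

x = rng.uniform(-1.0, 1.0, n_in)
print("column currents:", analogue_mac(x, g_pos, g_neg))
```

The speed-up arises because the entire matrix-vector product happens at the location of the weight data, avoiding the memory-to-processor transfers that dominate digital training.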
Resistive switching (RS) is a property exhibited by some materials systems that, especially during the last decade, has attracted considerable interest for the fabrication of electronic devices, with nonvolatile electronic memories receiving the most attention. The presence and quality of the RS phenomenon in a materials system can be studied using different prototype cells, performing different experiments, displaying different figures of merit, and developing different computational analyses. Consequently, the real usefulness and impact of the findings presented in each study for RS technology will also differ. This manuscript describes the recommended methodologies for the fabrication, characterization, and simulation of RS devices, as well as the proper methods for presenting the resulting data. The aim is to help the scientific community evaluate the real usefulness and impact of an RS study for the development of RS technology.
A key concern for resistive-switching random access memory (RRAM) is read noise, which arises from the structural, chemical, and electrical modifications taking place at the localized current path, or conductive filament (CF). Read noise typically appears as random telegraph noise (RTN), in which the current randomly fluctuates between ON and OFF levels. This paper addresses RTN in RRAM, providing physical interpretations and models for its dependence on the programming and read conditions. First, we explain the RTN dependence on the compliance current during the set transition in terms of the size-dependent depletion of carriers within the CF. Then, we discuss the bias dependence of the RTN switching times and amplitude, which can be explained by Joule heating and Poole-Frenkel barrier modifications arising from the electrostatics of the RTN fluctuating center.
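As a simple illustration of the two-level RTN described here (not the paper's physical model), the trace below simulates a discrete-time two-state Markov process with distinct mean dwell times for the ON and OFF levels; all parameter values are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_rtn(n_steps, tau_on, tau_off, i_on, i_off, dt=1e-6):
    """Two-level random telegraph noise as a discrete-time Markov chain:
    at each step the current switches level with probability dt/tau,
    where tau is the mean dwell time of the present level."""
    current = np.empty(n_steps)
    state = True  # start in the ON (high-current) level
    for t in range(n_steps):
        if rng.random() < dt / (tau_on if state else tau_off):
            state = not state
        current[t] = i_on if state else i_off
    return current

trace = simulate_rtn(100_000, tau_on=50e-6, tau_off=20e-6,
                     i_on=10e-6, i_off=8e-6)
print(f"fraction of time at ON level: {(trace == 10e-6).mean():.2f}")
# expected ~ tau_on / (tau_on + tau_off) ~ 0.71
```

The bias and compliance-current dependences discussed in the paper would enter such a model through the dwell times and the ON/OFF current amplitudes.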
Artificial intelligence (AI) has the potential to revolutionize our lives and society by enabling machine learning in industry, business, health, transportation, and many other fields. The ability to recognize objects, faces, and speech, however, requires exceptional computational power and time, which conflicts with the current difficulties in transistor scaling due to physical and architectural limitations. As a result, accelerating the progress of AI requires materials, devices, and systems that closely mimic the human brain. In this work, we review the current status of and challenges facing emerging neuromorphic devices for brain-inspired computing. First, we provide an overview of the memory device technologies that have been proposed for synapse and neuron circuits in neuromorphic systems. Then, we describe the implementation of synaptic learning in the two main types of neural networks, namely the deep neural network and the spiking neural network (SNN). Bio-inspired learning, such as the spike-timing-dependent plasticity scheme, is shown to enable the unsupervised learning processes typical of the human brain. Hardware implementations of SNNs for the recognition of spatial and spatio-temporal patterns are also shown to support cognitive computation in silico. Finally, we explore recent advances in reproducing bio-neural processes via device physics, such as insulator-metal transitions, nanoionic drift/diffusion, and magnetization flipping in spintronic devices. By harnessing the device physics of emerging materials, neuromorphic engineering with advanced functionality, higher density, and better energy efficiency can be developed.
Resistive switching memory (RRAM) has been proposed as an artificial synapse in neuromorphic circuits owing to its tunable resistance, low-power operation, and scalability. For the development of high-density neuromorphic circuits, it is essential to validate state-of-the-art bistable RRAM and to introduce small-area building blocks serving as artificial synapses. This work introduces a new synaptic circuit consisting of a one-transistor/one-resistor (1T1R) structure, where the resistive element is a HfO2 RRAM with bipolar switching. Spike-timing-dependent plasticity (STDP) is demonstrated in both the deterministic and stochastic regimes of the RRAM. Finally, a fully connected neuromorphic network is simulated, showing online unsupervised pattern learning and recognition for various voltages of the POST spike. The results support bistable RRAM for high-performance artificial synapses in neuromorphic circuits.
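For context, the pair-based exponential STDP window that such 1T1R synapses are typically built to emulate can be written down in a few lines. The sketch below uses illustrative parameter values and is not the circuit described in the abstract.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP: weight change as a function of the spike-timing
    difference delta_t = t_post - t_pre (seconds). Pre-before-post
    (delta_t > 0) potentiates; post-before-pre depresses."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

for dt in (-40e-3, -10e-3, 5e-3, 30e-3):
    print(f"t_post - t_pre = {dt * 1e3:+5.0f} ms -> dw = {stdp_dw(dt):+.5f}")
```

In the 1T1R implementation the sign and magnitude of the update are set by the voltage across the device when PRE and POST spikes overlap, which is why the abstract studies learning as a function of the POST spike voltage.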