The high nonuniformity and low endurance of resistive switching random access memory (RRAM) are the two major remaining device-level hurdles for mass production. Incremental step pulse programming (ISPP) can be a viable solution to the former problem, but the latter requires material-level innovation. In valence-change RRAM, electrodes have usually been regarded as either inert (e.g., Pt or TiN) or as oxygen-vacancy (VO) sources (e.g., Ta), but an electrode material can also serve as a VO sink. In this work, an RRAM using a 1.5-nm-thick Ta2O5 switching layer is presented, where one of the electrodes was VO-supplying Ta and the other was either inert TiN or VO-sinking RuO2. Whereas TiN could not remove the excessive VO in the memory cell, RuO2 absorbed the unnecessary VO. By carefully balancing the capabilities of the VO-supplying Ta and VO-sinking RuO2 electrodes, an almost invariant ISPP voltage and greatly enhanced endurance were achieved.
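As a rough illustration of the ISPP scheme mentioned above, the sketch below implements the basic program-and-verify loop in Python. The device interface (apply_pulse, read_resistance) and all pulse parameters are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of incremental step pulse programming (ISPP): apply a
# pulse, verify the cell state, and raise the amplitude until the target
# is met. The device callbacks and numbers are illustrative assumptions.

def ispp_set(apply_pulse, read_resistance,
             v_start=0.8, v_step=0.05, v_max=2.0,
             pulse_width=1e-7, r_target=1e4):
    """Return the pulse amplitude at which the cell first passes verify."""
    v = v_start
    while v <= v_max:
        apply_pulse(amplitude=v, width=pulse_width)  # program attempt
        if read_resistance() <= r_target:            # SET verify
            return v                                 # ISPP voltage
        v += v_step                                  # step up and retry
    raise RuntimeError("cell did not reach the target resistance")
```

An almost invariant ISPP voltage, as reported above, would correspond to the amplitude returned by this loop varying little from cycle to cycle and cell to cell.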
Crossbar arrays that use resistance-switching random-access memory are passive arrays and therefore require a high-performance cell selector. However, the array-writing margin is seriously limited by the unwanted reset (switching from a low-resistance state to a high-resistance state) of cells connected in parallel with the selected cell at the moment the selected cell is reset. This is closely related to the interconnection-wire resistance, which induces a switching-voltage drop along the wire. Pt/TiO2/TiN selectors, with atomic-layer-deposited TiO2 films varying in thickness from 2 to 8 nm, are fabricated on a sputtered-TiN bottom-electrode layer. The selector with an 8-nm-thick TiO2 layer is found to be optimal and is connected serially, through an external cable, to a Pt/2-nm-thick HfO2/TiN bipolar resistive-switching memory cell. The combined device shows good performance, with neither breakdown nor a significant switching-voltage increase, even without a compliance current. HSPICE simulation shows that an array size of ≈0.5 Mb can be obtained with the widely used tungsten electrode, demonstrating the feasibility of commercializing one-selector-one-resistor devices.
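The HSPICE result above hinges on the IR drop that the write current develops along the word and bit lines. A back-of-the-envelope version of that estimate is sketched below; the per-segment wire resistance, cell current, and margin are illustrative assumptions chosen only to land near the reported scale, not the paper's netlist values.

```python
# Rough estimate (not HSPICE) of the largest square 1S1R array for which
# the worst-case wire IR drop stays below a fixed fraction of the write
# voltage. All parameters are illustrative assumptions.

def max_array_side(r_wire_seg=2.0, i_cell=100e-6, v_write=3.0, margin=0.1):
    n = 1
    while True:
        # the farthest cell sees ~2*n wire segments (word line + bit line)
        v_drop = 2 * n * r_wire_seg * i_cell
        if v_drop > margin * v_write:
            return n - 1
        n += 1

side = max_array_side()
print(f"max side ≈ {side} wires, array ≈ {side * side / 1e6:.2f} Mb")
```

With these numbers the estimate gives roughly 750 lines per side, i.e., an array on the order of 0.5 Mb, consistent in scale with the simulation result quoted above.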
This work provides a comprehensive analytical study of the one-selector-one-resistor (1S1R) crossbar array (CBA) for hardware neural network (HNN) applications. Simplified analytical device models are prepared from a particular 1S1R device to validate the analysis. The read-margin (RM) analysis shows that the V/3 voltage scheme and a reduced selector leakage are necessary to maximize the RM and the maximum operable size N of the CBA, where N indicates the number of wires (word lines or bit lines). The write-margin (WM) analysis shows that unwanted switching of unselected cells during the write operation is unlikely in the 1S1R CBA even for large N, despite the voltage drop along the interconnection wire. Simultaneous multiply-and-accumulate operations are then analyzed with the same method to examine the influence of the voltage drop across the wires and memory cells in HNN applications. Reducing the wire resistance and the on-state conductance increases the available N when the selector operates near its threshold conditions. The proposed analytical model can estimate the maximum accuracy degradation of the HNN caused by the unintentional voltage drop.
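To make the read-margin argument concrete, the sketch below evaluates a simplified RM figure for an N x N array under the V/3 scheme, with every unselected cell biased at V/3 and its leakage set by an exponential, diode-like selector model. The RM definition and all device parameters here are assumptions for illustration, not the paper's model.

```python
# Simplified read-margin sketch for an N x N 1S1R array under the V/3
# scheme: the selected cell is read at full voltage while all other cells
# leak at V/3 through the selector. Parameters are illustrative.
import math

def read_margin(n, v_read=1.5, r_lrs=1e4, r_hrs=1e6, i0=1e-12, v_t=0.15):
    i_on  = v_read / r_lrs                       # selected cell in LRS
    i_off = v_read / r_hrs                       # selected cell in HRS
    # exponential selector leakage at the unselected-cell bias V/3
    i_leak = i0 * (math.exp((v_read / 3) / v_t) - 1)
    sneak = (n * n - 1) * i_leak                 # worst-case sneak sum
    return (i_on - i_off) / (i_on + sneak)

for n in (64, 256, 1024, 4096):
    print(f"N = {n:5d}: RM ≈ {read_margin(n):.3f}")
```

The qualitative trend matches the analysis above: the margin collapses as N grows unless the selector leakage at V/3 is suppressed.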
Memristor crossbar arrays were fabricated based on a Ti/HfO2/Ti stack that exhibited electroforming-free behavior and low device variability in a 10 × 10 array. The binary high-resistance and low-resistance states of the bipolar memristor device were used to represent the synaptic weights of a binarized neural network. The electroforming-free memristor was confirmed to be suitable as a binary synaptic device because of its higher device yield, lower variability, and less severe malfunction (for example, hard breakdown) compared with electroformed memristors based on a Ti/HfO2/Pt structure. A working binarized neural network adopting the electroforming-free binary memristors was demonstrated through simulation.
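A minimal sketch of how such binary states can represent binarized weights and perform an in-array multiply-and-accumulate is given below; the conductance values and the single-device (non-differential) weight mapping are assumptions for illustration, not measurements from the Ti/HfO2/Ti devices.

```python
# Sketch: map binarized weights onto HRS/LRS conductances and compute a
# column multiply-and-accumulate as a bit-line current sum (I = G^T V).
# Conductances and voltages are illustrative assumptions.
import numpy as np

G_LRS, G_HRS = 1e-4, 1e-7              # on/off conductances in siemens

def program(weights):
    """Map weight +1 -> LRS and -1 -> HRS (HRS ~ 0 approximates weight 0)."""
    return np.where(weights > 0, G_LRS, G_HRS)

def column_mac(g, v_in):
    """Bit-line currents: each column sums conductance-weighted inputs."""
    return g.T @ v_in

rng = np.random.default_rng(0)
w = rng.choice([-1, 1], size=(10, 10))  # 10 x 10 array as in the paper
v = rng.choice([0.0, 0.2], size=10)     # binary read voltages
print(column_mac(program(w), v))
```

In practice, true ±1 weights are often realized with a differential pair of devices per weight; the single-device mapping above is the simplest possible variant.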
In spite of remarkable progress in machine learning techniques, state-of-the-art machine learning algorithms often keep machines from real-time (online) learning, due in part to the computational complexity of parameter optimization. As an alternative, a learning algorithm that trains a memory in real time is proposed, named the Markov chain Hebbian learning (MCHL) algorithm. The algorithm pursues efficient memory use during training in that (i) the weight matrix has ternary elements (-1, 0, 1) and (ii) each update follows a Markov chain: the upcoming update does not need the past weight memory. The algorithm was verified on two proof-of-concept tasks (handwritten digit recognition and multiplication table memorization) in which numbers were taken as symbols. In particular, the latter bases multiplication arithmetic on memory, which may be analogous to humans' mental arithmetic. Memory-based multiplication arithmetic feasibly offers a basis for factorization, supporting novel insight into the arithmetic.

Recent progress in machine learning (particularly, deep learning) endows artificial intelligence with recognition and problem-solving capabilities beyond the human level. 1,2,3 Computers on the von Neumann architecture are the platform for these breakthroughs, albeit frequently powered by hardware accelerators, e.g., the graphics processing unit (GPU). 4 The main memory in this case stores fragmentary information, e.g., the weight matrix, representations of hidden neurons, and input datasets, intertwined among the fragments. It is therefore conceivable that memory organization is essential to efficient memory retrieval. To this end, a memory keeping a weight matrix in place can be considered, in which the matrix matches different representations selectively as a consequence of learning. For instance, a visual input (representation) such as a handwritten '1' recalls the symbolic memory '1' (internal representation) through the stored weight matrix, so that the symbolic memory can readily be recalled. In this regard, a high-density crossbar array (CBA) of two-terminal memory elements, e.g., oxide-based resistive memory and phase-change memory, is perhaps a promising solution for machine learning acceleration. 5,6,7,8,9 The connection weight between a pair of neurons is stored in each memory element of the CBA as a conductance, and the weight is read out in place by monitoring the current in response to a voltage. 5,6,7,8,9 Albeit promising, this approach must address the following challenges: each weight must be pre-calculated beforehand using a conventional error-correcting technique, and the pre-calculated value needs to be accommodated by a single memory element. The former particularly hinders online learning.

In this study, an easy-to-implement algorithm based on a stochastic neural network, termed the Markov chain Hebbian learning (MCHL) algorithm, is proposed. The most notable difference between the MCHL and the restricted Boltzmann machine (RBM) 10,11,12,13 is that the MCHL is a discriminative...
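The excerpt specifies two properties of the MCHL update: ternary weights in {-1, 0, 1} and memoryless (Markov) updates that depend only on the current weight matrix. The sketch below shows a generic stochastic Hebbian step with exactly those two properties; the specific update rule, its probability, and the unit encoding are assumptions for illustration, not the paper's exact algorithm.

```python
# Generic ternary, memoryless Hebbian update: each step depends only on
# the current weights (Markov property) and keeps w in {-1, 0, 1}. The
# rule and its parameters are illustrative assumptions.
import numpy as np

def hebbian_step(w, x, y, rng, p=0.1):
    """Stochastically nudge w toward the Hebbian term sign(x y^T)."""
    target = np.sign(np.outer(x, y))       # coincidence of pre/post activity
    flip = rng.random(w.shape) < p         # stochastic update mask
    return np.where(flip, np.clip(w + target, -1, 1), w).astype(int)

rng = np.random.default_rng(1)
w = np.zeros((4, 3), dtype=int)            # 4 input units, 3 symbol units
x = np.array([1, 0, 1, 0])                 # input representation
y = np.array([0, 1, 1])                    # recalled symbolic representation
for _ in range(50):
    w = hebbian_step(w, x, y, rng)
print(w)
```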