Industry is prototyping and commercializing Resistive Random Access Memories (RRAMs). Unfortunately, RRAM devices introduce new defects and faults; hence, high-quality test solutions are urgently needed. Based on silicon measurements, this paper identifies a new, RRAM-unique fault, the Intermittent Undefined State Fault (IUSF); this fault causes the RRAM device to intermittently change its switching mechanism from bipolar to complementary switching, leaving the device in an undefined state. First, we characterize the IUSF by analyzing RRAM devices and show that a single RRAM device can suffer from the IUSF in up to 1.068% of its switching cycles; we relate the IUSF to two defects: capping-layer doping and over-forming. This clearly shows the importance of detecting this fault. Second, we develop a device-aware defect model that accurately describes the physical behavior of these defects and gives essential insights into the IUSF's behavior and its detection. Third, we apply the device-aware defect model to perform fault modeling, and use the results to develop high-quality test solutions for the IUSF. These contributions improve the overall RRAM test quality, which enables mass commercialization of RRAMs.
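The abstract summarizes the flow rather than the device-aware model itself. As a rough illustration of why an intermittent fault calls for repeated write-verify stress rather than a single read, the following Python sketch injects an undefined state with a small assumed per-cycle probability; the behavioral model, resistances, reference, and probability are illustrative assumptions, not the paper's measurements or model.

```python
import random

# Minimal behavioral sketch (not the paper's device-aware model): an RRAM cell
# that, with a small assumed per-cycle probability, intermittently lands in an
# undefined resistive state instead of switching cleanly. All values are
# illustrative.
LRS, HRS = 10e3, 100e3      # assumed nominal low/high resistive states (ohms)
P_IUSF = 0.01               # assumed per-cycle probability of the fault firing

def switch(target_bit):
    """Resistance of the cell after one SET ('1') or RESET ('0') attempt."""
    if random.random() < P_IUSF:
        # Intermittent undefined state: resistance falls between LRS and HRS,
        # so a read may or may not resolve the intended logic value.
        return random.uniform(2 * LRS, HRS / 2)
    return LRS if target_bit == 1 else HRS

def read(resistance, r_ref=40e3):
    """Threshold read against a reference resistance."""
    return 1 if resistance < r_ref else 0

def detection_probability(n_repeats, trials=20_000):
    """Chance that repeated write-verify of the same cell exposes the fault."""
    detected = sum(
        any(read(switch(1)) != 1 for _ in range(n_repeats))
        for _ in range(trials)
    )
    return detected / trials

for n in (1, 5, 20):
    print(f"{n:2d} write-verify repeats -> detection probability ~ "
          f"{detection_probability(n):.3f}")
```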
Computation-In-Memory (CIM) using memristive devices is a promising approach to overcome the performance limitations of conventional computing architectures introduced by the von Neumann bottleneck, also known as the memory wall and the power wall. It has been shown that accelerators based on memristive devices can deliver higher energy efficiency and data throughput than conventional architectures. Among the vast multitude of memristive devices, bipolar resistive switches (BRS) based on the valence change mechanism (VCM) are particularly interesting due to their low-power operation, non-volatility, high integration density, and CMOS compatibility. While a wide range of possible applications is considered, many of them, such as artificial neural networks, rely heavily on Vector-Matrix-Multiplications (VMMs) as a mathematical operation. These VMMs are made up of large numbers of Multiplication and Accumulation (MAC) operations. The MAC operation can be realised with memristive devices in an analog fashion using Ohm's law and Kirchhoff's law. However, VCM devices exhibit a range of non-idealities that affect the VMM performance, which in turn impacts the overall accuracy of the application. These non-idealities can be classified into time-independent (programming variability) and time-dependent (read disturb and read noise) effects. Additionally, peripheral circuits such as Analog-to-Digital Converters (ADCs) can introduce errors during digitization. In this work, we experimentally and theoretically investigate the impact of device- and circuit-level effects on the VMM in a VCM crossbar. Our analysis shows that the variability of the Low Resistive State (LRS) plays a key role and that reading in the RESET direction should be favored over reading in the SET direction.
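To make the Ohm's-law/Kirchhoff's-law MAC concrete, the sketch below maps a binary weight matrix to assumed LRS/HRS conductances, applies a read-voltage vector, and sums the column currents; the conductance values, read voltage, and the 5% LRS spread are illustrative assumptions, not measured device data from this work.

```python
import numpy as np

# Minimal sketch of an analog MAC in a memristive crossbar, assuming an ideal
# linear model: each cell holds a conductance G[i][j], a read voltage V[i] is
# applied per row (Ohm's law: I = G * V), and column currents sum according to
# Kirchhoff's current law. All values below are illustrative.
rng = np.random.default_rng(0)

G_LRS, G_HRS = 1 / 10e3, 1 / 200e3          # assumed LRS/HRS conductances (S)
weights = rng.integers(0, 2, size=(4, 3))   # 4x3 binary weight matrix
G = np.where(weights == 1, G_LRS, G_HRS)    # map weights to conductances

V_READ = 0.2                                # assumed read voltage (V)
v = rng.integers(0, 2, size=4) * V_READ     # binary input vector as voltages

I_column = v @ G        # per-column current = sum_i V[i] * G[i][j] (the MAC)
print("column currents (A):", I_column)

# LRS variability (a time-independent non-ideality named in the abstract):
# perturb the programmed conductances and observe the shift in the MAC result.
G_var = G * rng.normal(1.0, 0.05, size=G.shape)   # assumed 5% sigma
print("currents with 5% conductance spread (A):", v @ G_var)
```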
Spin-torque transfer magnetic random access memory (STT-MRAM) based computation-in-memory (CIM) architectures have shown great prospects for energy-efficient computing. However, device variations and non-idealities narrow the sensing margin, which severely impacts the computing accuracy. In this work, we propose an adaptive referencing mechanism to improve the sensing margin of a CIM architecture for Boolean binary logic (BBL) operations. We generate reference signals using multiple STT-MRAM devices and place them strategically within the array such that these signals compensate for the variations and track the wire parasitics effectively. We demonstrate this behavior using an STT-MRAM model calibrated with a 1-Mbit characterization array. Results show that our proposed architecture achieves up to 17.8 TOPS/W for a binary neural network (BNN) on the MNIST dataset and a 130× performance improvement for text encryption compared to a software implementation on a CPU.
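The exact reference generation and placement scheme is the paper's contribution; the sketch below only illustrates the general idea of reference-based current sensing for a two-input Boolean operation. The resistances, variation, wire-parasitic term, and the particular reference-cell combination are assumptions for illustration.

```python
import numpy as np

# Assumed resistive model of STT-MRAM cells: logic '1' stored as the parallel
# (low-resistance) state, '0' as the anti-parallel (high-resistance) state.
rng = np.random.default_rng(1)

R_P, R_AP = 5e3, 10e3     # assumed parallel / anti-parallel resistances (ohms)
V_READ = 0.1              # assumed read voltage (V)
R_WIRE = 200.0            # assumed wire parasitic seen by an activated cell

def cell_current(bit, sigma=0.03):
    """Current through one activated cell, with assumed device variation."""
    r = (R_P if bit == 1 else R_AP) * rng.normal(1.0, sigma)
    return V_READ / (r + R_WIRE)

def bitwise_and(a, b):
    """Sense AND(a, b) by summing the currents of two activated cells and
    comparing against a reference assembled from whole in-array reference
    cells, so the reference sees similar variation and parasitics."""
    i_data = cell_current(a) + cell_current(b)
    # Midpoint between the (1,1) level (2*I_P) and the (1,0) level (I_P + I_AP),
    # built as (3*I_P + I_AP) / 2 from reference cells (assumed scheme).
    i_ref = (3 * cell_current(1) + cell_current(0)) / 2
    return int(i_data > i_ref)

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b}) sensed as {bitwise_and(a, b)}")
```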
Conventional von Neumann architectures cannot successfully meet the demands of emerging computation- and data-intensive applications. These shortcomings can be addressed by embracing new architectural paradigms based on emerging technologies. In particular, Computation-In-Memory (CiM) using emerging technologies such as Resistive Random Access Memory (ReRAM) is a promising approach to meet the computational demands of data-intensive applications such as neural networks and database queries. In CiM, computation is done in an analog manner; digitization of the results is costly in several respects, such as area, energy, and performance, which hinders the potential of CiM. In this article, we propose an efficient Voltage-Controlled-Oscillator (VCO)–based analog-to-digital converter (ADC) design to improve the performance and energy efficiency of the CiM architecture. Due to its efficiency, the proposed ADC can be assigned per column instead of being shared among multiple columns. This boosts the parallel execution and overall efficiency of the CiM crossbar array. The proposed ADC is evaluated using a Multiplication and Accumulation (MAC) operation implemented in ReRAM-based CiM crossbar arrays. Simulation results show that our proposed ADC can distinguish up to 32 levels within 10 ns while consuming less than 5.2 pJ of energy. In addition, our proposed ADC can tolerate ≈30% variability with negligible impact on its performance.
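As a rough illustration of the counting principle behind a VCO-based ADC for CiM readout (oscillation frequency steered by the column current, edges counted during a fixed conversion window), the following sketch maps an assumed full-scale current to the 32 levels and 10 ns window quoted in the abstract; the current-to-frequency gain and current range are illustrative assumptions, not the proposed circuit's parameters.

```python
# Counting principle of a VCO-based ADC: the column current sets the oscillator
# frequency, and the number of cycles counted in a fixed window is the code.
T_WINDOW = 10e-9        # conversion window (10 ns, as quoted in the abstract)
I_FULL_SCALE = 50e-6    # assumed full-scale column current (A)
LEVELS = 32             # 32 distinguishable levels (from the abstract)
K_VCO = LEVELS / (I_FULL_SCALE * T_WINDOW)   # assumed current-to-frequency gain (Hz/A)

def vco_adc(i_column):
    """Quantize a column current by counting oscillator cycles in T_WINDOW."""
    freq = K_VCO * i_column           # assumed linear current-to-frequency law
    count = int(freq * T_WINDOW)      # ripple-counter value at the window end
    return min(count, LEVELS - 1)     # clamp to the 5-bit output range

for i_in in (2e-6, 10e-6, 25e-6, 49e-6):
    print(f"{i_in * 1e6:5.1f} uA -> code {vco_adc(i_in):2d}")
```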
The constant demand for faster and cheaper computers has been driving technological and architectural improvements. However, current technology is running into three technology walls: the leakage wall, the reliability wall, and the cost wall. Meanwhile, the performance of existing architectures is also saturating due to three well-known architecture walls: the memory wall, the power wall, and the instruction-level parallelism (ILP) wall. Hence, many novel technologies and architectures have been introduced and are being intensively developed. Our previous work presented a comprehensive classification and broad overview of memory-centric computer architectures. In this article, we discuss the most important classes of memory-centric architectures in depth and evaluate their advantages and disadvantages. Moreover, for each class, the article provides a comprehensive survey of the memory-centric architectures available in the literature.