2021
DOI: 10.1109/tcsi.2021.3072200

RRAM for Compute-in-Memory: From Inference to Training

Abstract: To efficiently deploy machine learning applications to the edge, compute-in-memory (CIM) based hardware accelerators are a promising solution with improved throughput and energy efficiency. Instant-on inference is further enabled by emerging non-volatile memory technologies such as resistive random access memory (RRAM). This paper reviews recent progress in RRAM-based CIM accelerator design. First, the multilevel-state RRAM characteristics are measured from a test vehicle to examine the key device pro…

Cited by 73 publications (22 citation statements)
References 57 publications
“…For example, it has been reported that to avoid the large decline in inference accuracy of the CIFAR-10 dataset, less than 1% variation in the overall conductivity range of RRAM is required. [22] Favorably, cycle-to-cycle variation is mainly attenuated through iterative training and can be improved through the write-verify process. [23,24] In addition, small noise injected during training can improve the robustness for subsequent inference variation by preventing convergence to the local minima in the energy landscape of the DNN model.…”
Section: Variation, Device Size
confidence: 99%
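The noise-injection idea quoted above can be illustrated with a minimal sketch, assuming a single fully connected layer and NumPy; the function name, the multiplicative Gaussian noise model, and the 1% noise scale are illustrative assumptions for this report, not the cited papers' actual training setup.

```python
# Illustrative sketch (not from the cited work): inject small Gaussian weight
# noise during the forward pass so training converges to a solution that
# tolerates RRAM conductance variation at inference time.
import numpy as np

rng = np.random.default_rng(0)

def forward_with_weight_noise(x, W, sigma=0.01):
    """One fully connected layer with Gaussian noise added to the weights.

    sigma is expressed as a fraction of the full weight (conductance) range,
    mirroring the ~1% variation budget quoted above.
    """
    noise = rng.normal(0.0, sigma * np.ptp(W), size=W.shape)
    return x @ (W + noise)

# Toy usage: a 4-input, 3-output layer evaluated with perturbed weights.
x = rng.normal(size=(2, 4))
W = rng.normal(size=(4, 3))
print(forward_with_weight_noise(x, W, sigma=0.01))
```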
“…where G_ij is the programmed conductance value, representing the network weight. Since it is not possible to program memristor cells to negative conductances, two adjacent crossbars or bit-lines are used in tandem instead [11], such that the actual weight value is G_crossbar1 − G_crossbar2. This also allows us to double the effective range of an n-bit quantization scheme without adding a complement bit, since both negative and positive weights can be represented in n bits just on different crossbars; we will be utilising this fact throughout this work.…”
Section: Preliminaries: A. Memristor Crossbars As Neural Network Accele...
confidence: 99%
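The differential-pair mapping described in this statement can be sketched as follows, assuming NumPy and a simple uniform n-bit quantizer; the function name, the conductance range g_max, and the scaling convention are illustrative assumptions rather than the cited paper's implementation.

```python
# Illustrative sketch (not the cited implementation): split signed weights
# across two non-negative crossbar conductance arrays so that W ~ G1 - G2.
import numpy as np

def to_differential_conductances(W, n_bits=4, g_max=1.0):
    """Quantize signed weights to n_bits and map them onto two crossbars.

    Positive weights go on the first crossbar, negative weights on the
    second, so each array only needs non-negative conductances and the full
    n-bit range covers both signs without a sign bit.
    """
    levels = 2 ** n_bits - 1
    w_max = np.max(np.abs(W))
    q = np.round(np.clip(W / w_max, -1.0, 1.0) * levels)  # signed integer levels
    g1 = np.where(q > 0, q, 0) * g_max / levels            # crossbar 1: positive part
    g2 = np.where(q < 0, -q, 0) * g_max / levels           # crossbar 2: negative part
    return g1, g2, w_max / g_max                           # scale to recover weights

# Toy usage: the effective weight read from the pair is (g1 - g2), rescaled.
W = np.array([[0.7, -0.3], [-1.0, 0.25]])
g1, g2, scale = to_differential_conductances(W)
print((g1 - g2) * scale)  # approximately W, within quantization error
```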
“…The study conducted by Yu et al. in Ref. [31], using TSMC® 40 nm RRAM technology and Intel® 22 nm RRAM technology, built a VGG-8 NN trained on the CIFAR-10 dataset using a modified NeuroSim, with the intention of optimizing ADC usage by using a MUX-based ADC. The same group utilized a mixed RRAM design, with RRAM memory designed for the MSBs and a regular memory used for the LSBs, as shown in Ref.…”
Section: B. Variability Study In Neuromorphic Learning Circuits On The...
confidence: 99%