Resistive Memory‐Based In‐Memory Computing: From Device and Large‐Scale Integration System Perspectives
2019
DOI: 10.1002/aisy.201900068

Abstract: In‐memory computing is a computing scheme that integrates data storage and arithmetic computation functions. Resistive random access memory (RRAM) arrays with innovative peripheral circuitry provide the capability of performing vector‐matrix multiplication beyond the basic Boolean logic. With such a memory–computation duality, RRAM‐based in‐memory computing enables an efficient hardware solution for matrix‐multiplication‐dependent neural networks and related applications. Herein, the recent development of RRAM…

Cited by 67 publications (47 citation statements); references 90 publications.
“…Several nonidealities cause the deterioration of the performance of memristive crossbar-based neural net architectures. Figure 1 shows some nonidealities such as a limited number of stable resistive states, [36][37][38] conductance variation, [39][40][41] memristor aging issues, [21] endurance, [34,42] reliability issues, [43] and device failure. [35] Limiting the number of stable resistive states leads to low precision of dot-product multiplication, which, in turn, reduces the memristive neural network accuracy.…”
Section: Memristor and Memristor Nonidealities (mentioning)
confidence: 99%
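To make the quoted precision loss concrete, the following is a minimal numerical sketch (the 64×64 array size, the set of conductance levels, and the 5% programming-variation figure are illustrative assumptions, not values from the cited works) of how quantizing weights onto a few stable conductance levels, together with conductance variation, degrades a crossbar dot product.

```python
# Minimal sketch (assumed parameters): limited stable conductance levels plus
# programming variation reduce the precision of a crossbar dot product y = W @ x.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, levels):
    """Map ideal weights onto a finite set of conductance levels in [0, 1]."""
    steps = np.linspace(0.0, 1.0, levels)
    idx = np.argmin(np.abs(w[..., None] - steps), axis=-1)
    return steps[idx]

w = rng.uniform(0.0, 1.0, size=(64, 64))   # ideal (normalized) weights
x = rng.uniform(0.0, 1.0, size=64)         # input vector (read voltages)

y_ideal = w @ x
for levels in (4, 8, 16, 32):
    w_q = quantize(w, levels)
    # assumed device-to-device programming variation, ~5% of the conductance range
    w_dev = np.clip(w_q + rng.normal(0.0, 0.05, w_q.shape), 0.0, 1.0)
    err = np.linalg.norm(w_dev @ x - y_ideal) / np.linalg.norm(y_ideal)
    print(f"{levels:2d} levels -> relative VMM error {err:.3f}")
```

Running the sketch shows the relative error shrinking as the number of levels grows until the assumed variation term dominates, which is the trade-off the quoted passage points to.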
“…[44] The other type of memristor nonideality consists of nonlinear weight distribution, asymmetry and nonlinear programming, and device-to-device and cycle-to-cycle variation. [39,45,46] Hardware noise and the R_OFF/R_ON ratio can also influence the design of memristive neural network architectures. In addition, memristive crossbars are affected by sneak-path currents, wire resistances, and leakage currents.…”
Section: Memristor and Memristor Nonidealities (mentioning)
confidence: 99%
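As a rough illustration of one of these nonidealities, the sketch below (the conductance values and the naive readout scheme are assumptions made for illustration) shows how a finite R_OFF/R_ON ratio leaves a data-dependent offset in the column currents of a crossbar VMM when it is not compensated.

```python
# Minimal sketch (assumed values): a finite R_OFF/R_ON ratio adds a
# data-dependent offset current to every column of a crossbar VMM.
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(0.0, 1.0, size=(128, 128))   # ideal nonnegative weights
x = rng.uniform(0.0, 0.3, size=128)          # read voltages (V)
y_ideal = w @ x

g_on = 100e-6                                # assumed 100 uS on-state conductance
for ratio in (10, 100, 1000):
    g_off = g_on / ratio
    g = g_off + w * (g_on - g_off)           # weight-to-conductance mapping
    i_col = g @ x                            # column currents, I = G @ V
    y_read = i_col / (g_on - g_off)          # naive readout, no offset cancellation
    err = np.linalg.norm(y_read - y_ideal) / np.linalg.norm(y_ideal)
    print(f"R_OFF/R_ON = {ratio:4d} -> relative VMM error {err:.3f}")
```

With a ratio of 10 the uncorrected offset is a sizeable fraction of the result, whereas a ratio of 1000 makes it nearly negligible, which is why the R_OFF/R_ON ratio enters the architecture design as the quote states.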
“…These macro circuits can be tiled together according to the structure of the deep neural networks to be constructed. For a more comprehensive review of peripheral circuits and large-scale integration, readers can refer to Yan et al., 2019. These memristive DL accelerators are projected to be superior to CMOS-based or other solutions in several aspects, such as performance (operations per second, OPS), area, and power efficiency (Zhang et al., 2020a; Sebastian et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%
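The tiling idea can be sketched in a few lines; the 64×64 macro size and the helper names below are hypothetical stand-ins for the analog macro circuits the citing paper refers to, not an implementation from the reviewed work.

```python
# Minimal sketch (assumed 64x64 macro size): tiling a large weight matrix over
# fixed-size crossbar macros and accumulating the partial products, the way a
# deep-network layer would be mapped onto multiple in-memory-computing macros.
import numpy as np

TILE = 64                                    # assumed macro array size (64x64)

def crossbar_macro(w_tile, x_tile):
    """Stand-in for one analog macro: returns its partial dot product."""
    return w_tile @ x_tile

def tiled_vmm(w, x):
    rows, cols = w.shape
    y = np.zeros(rows)
    for r in range(0, rows, TILE):
        for c in range(0, cols, TILE):
            y[r:r + TILE] += crossbar_macro(w[r:r + TILE, c:c + TILE],
                                            x[c:c + TILE])
    return y

rng = np.random.default_rng(2)
w = rng.standard_normal((256, 512))
x = rng.standard_normal(512)
assert np.allclose(tiled_vmm(w, x), w @ x)   # tiling reproduces the full VMM
```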
“…TCAM can perform in-memory search and pattern matching between the query feature vector and stored vectors of binary bits. In the study by Yan et al., 2019a and Yan et al., 2019b, 2-transistor/2-RRAM (2T2R) TCAM cells were used to store the TCAM vectors. For each TCAM cell, the stored TCAM datum was defined as the bit “1” for RRAM1 in HRS and RRAM2 in LRS, the bit “0” for RRAM1 in LRS and RRAM2 in HRS, and the bit “X” (don't-care bit) for both RRAMs in HRS.…”
Section: Introduction (mentioning)
confidence: 99%
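A behavioral sketch of this 2T2R encoding follows; the search convention (the query bit selecting which of the two RRAMs is probed, and any low-resistance probed cell discharging the match line) is an assumption consistent with the stated encoding, not a circuit-level description from Yan et al.

```python
# Behavioral sketch (search convention assumed) of the quoted 2T2R TCAM cell:
# bit "1" -> RRAM1 HRS, RRAM2 LRS; bit "0" -> RRAM1 LRS, RRAM2 HRS;
# bit "X" (don't care) -> both RRAMs HRS.
HRS, LRS = "HRS", "LRS"

ENCODE = {"1": (HRS, LRS), "0": (LRS, HRS), "X": (HRS, HRS)}

def program_word(bits):
    """Program one stored TCAM word as a list of (RRAM1, RRAM2) states."""
    return [ENCODE[b] for b in bits]

def match(stored_word, query_bits):
    """Match line stays high only if no probed RRAM is in LRS."""
    for (rram1, rram2), q in zip(stored_word, query_bits):
        probed = rram1 if q == "1" else rram2   # query bit selects the probed RRAM
        if probed == LRS:                       # low-resistance path discharges the line
            return False
    return True

word = program_word("10X1")
print(match(word, "1001"))   # True: the "X" cell matches either query bit
print(match(word, "1101"))   # False: second bit mismatches
```

Under this convention a stored “X” cell never discharges the match line, so it matches any query bit, which reproduces the don't-care behavior described in the quote.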
“…VMM operations are performed where the weights are physically stored, alleviating memory-wall problems. [17] Therefore, for the in-memory computing platform based on the cross-point array architecture, [18] selecting the appropriate devices as the…”
[Figure 1 caption of the citing paper: transition to a non-von-Neumann architecture, in which multiple synaptic array blocks executing VMM in the place where the memories are stored are implemented, thereby eliminating the memory-wall bottleneck.]
Section: Introduction (mentioning)
confidence: 99%
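A minimal sketch of this weight-stationary VMM is given below; the differential conductance mapping for signed weights, the conductance range, and the voltage levels are illustrative assumptions rather than the scheme of any specific cited work.

```python
# Minimal sketch (assumed conductance range and mapping): executing a VMM in the
# array where the weights are stored, using a differential pair of conductances
# (G+ - G-) per weight so that signed values can be represented.
import numpy as np

rng = np.random.default_rng(3)
G_MAX = 100e-6                               # assumed maximum cell conductance (S)

w = rng.uniform(-1.0, 1.0, size=(32, 32))    # signed synaptic weights
x = rng.uniform(0.0, 0.2, size=32)           # inputs encoded as read voltages (V)

scale = G_MAX / np.abs(w).max()
g_pos = np.where(w > 0,  w, 0.0) * scale     # positive part on the "+" column
g_neg = np.where(w < 0, -w, 0.0) * scale     # magnitude of negative part on "-"

i_pos = g_pos @ x                            # column currents from the two arrays
i_neg = g_neg @ x
y = (i_pos - i_neg) / scale                  # differential sensing recovers w @ x

assert np.allclose(y, w @ x)
```

The point of the sketch is that the multiply-accumulate happens as Ohm's-law currents summed on the bit lines, so no weight ever moves between a separate memory and a processor, which is the memory-wall argument made in the quote.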