2022
DOI: 10.1038/s41586-022-04992-8

A compute-in-memory chip based on resistive random-access memory

Abstract: Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM)1 promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory2–5. Although recent studies have demonst…

Cited by 403 publications (234 citation statements)
References 54 publications (108 reference statements)

“…After programming, the devices were allowed to de-trap for a minimum of 300 ms. A detailed analysis of the WRITE-to-READ delay on similar devices is shown elsewhere [19]. The READ operation was conducted by a voltage ramp with a step size of 100 mV at the word line (W.L.).…”
Section: A. Device Characterization (mentioning)
confidence: 99%
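
The excerpt above outlines a simple measurement protocol: program the device, wait at least 300 ms for charge de-trapping, then READ with a 100 mV voltage ramp on the word line. A minimal Python sketch of that sequence, assuming a hypothetical Instrument interface, an unspecified programming pulse and an assumed ramp stop voltage (none of which come from the cited work), is:

import time

DETRAP_DELAY_S = 0.3   # minimum WRITE-to-READ delay (300 ms)
READ_STEP_V = 0.1      # READ ramp step size (100 mV)
READ_STOP_V = 1.0      # assumed ramp limit; not specified in the excerpt

def characterize(instrument, program_voltage):
    """Program the device, allow de-trapping, then read out an I-V ramp."""
    instrument.apply_pulse(program_voltage)   # WRITE / programming pulse (hypothetical call)
    time.sleep(DETRAP_DELAY_S)                # de-trap for at least 300 ms
    currents = []
    v = READ_STEP_V
    while v <= READ_STOP_V + 1e-9:
        currents.append(instrument.measure_current(v))  # READ at the word line (hypothetical call)
        v += READ_STEP_V
    return currents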
“…Their real-time data have manifested the need to overcome the latency and energy costs induced by data transfer between the processing unit and memory in the von Neumann architecture. Therefore, researchers have shown interest in building an in-memory-computing (IMC)-based alternative paradigm, where the computation is done inside the memory, reducing latency and energy cost. The quintessential example of IMC is vector-matrix multiplication (VMM) with nonvolatile memories (NVMs), which is applied in many high-level applications such as neuromorphic computing and in solving computationally hard problems. During the execution of VMM for neuromorphic computing, the memory unit must perform computations using single-instruction data sets.…”
Section: Introduction (mentioning)
confidence: 99%
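
The statement above describes the core in-memory-computing primitive: vector-matrix multiplication performed directly on a nonvolatile-memory array, where weights are stored as conductances and Ohm's and Kirchhoff's laws accumulate the products as column currents. A minimal numpy sketch of that idea follows; the array size, conductance range and input voltages are arbitrary illustrative values, not parameters of the cited work.

import numpy as np

rng = np.random.default_rng(0)
rows, cols = 4, 3                              # illustrative crossbar size
G = rng.uniform(1e-6, 1e-4, (rows, cols))      # conductance matrix: programmed weights (siemens)
V = rng.uniform(0.0, 0.2, rows)                # input voltage vector applied to the rows

I = G.T @ V    # column currents: each entry is an analogue multiply-accumulate of V with one weight column
print(I)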
“…When the voltage applied between the two electrodes reaches the set voltage or the reset voltage, respectively, the resistance of the switching layer reversibly switches between a high-resistance state (HRS) and a low-resistance state (LRS) [14,15], the latter being several orders of magnitude smaller than the HRS. Importantly, the low applied voltages give memristors a low-power-consumption characteristic [16][17][18]. Meanwhile, the fabrication process of memristors is compatible with CMOS process technology [19], allowing them to be easily integrated into large crossbar arrays and to scale in size.…”
Section: Introduction (mentioning)
confidence: 99%
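
The excerpt above describes bipolar resistive switching: crossing the set voltage drives the device into the LRS, and crossing the reset voltage drives it back into the HRS. The toy Python model below sketches that threshold behaviour; all threshold and resistance values are illustrative placeholders, not measured data from the cited papers.

HRS_OHM = 1e6     # high-resistance state (several orders of magnitude above the LRS)
LRS_OHM = 1e3     # low-resistance state
V_SET = 1.0       # set threshold (HRS -> LRS), illustrative value
V_RESET = -0.8    # reset threshold (LRS -> HRS), illustrative value

class Memristor:
    def __init__(self):
        self.resistance = HRS_OHM             # start in the high-resistance state

    def apply(self, voltage):
        if voltage >= V_SET:
            self.resistance = LRS_OHM         # SET: switch to low resistance
        elif voltage <= V_RESET:
            self.resistance = HRS_OHM         # RESET: switch back to high resistance
        return voltage / self.resistance      # read-out current at this voltage

m = Memristor()
print(m.apply(1.2), m.apply(0.1), m.apply(-1.0))  # set, low-voltage read, reset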