2023
DOI: 10.1002/adma.202204944
Compute in‐Memory with Non‐Volatile Elements for Neural Networks: A Review from a Co‐Design Perspective

Abstract: Deep learning has become ubiquitous, touching daily lives across the globe. Today, traditional computer architectures are stressed to their limits in efficiently executing the growing complexity of data and models. Compute‐in‐memory (CIM) can potentially play an important role in developing efficient hardware solutions that reduce data movement from compute‐unit to memory, known as the von Neumann bottleneck. At its heart is a cross‐bar architecture with nodal non‐volatile‐memory elements that performs an anal…

Cited by 30 publications (17 citation statements)
References 209 publications
“…However, these will not be energy efficient enough for the large data-intensive workloads of the future and will require new approaches to matrix multiplication hardware. One of the options is "compute-in-memory": using a Kirchhoff's-law-based analog step to carry out a vector dot product, which nearly eliminates the shuttling of data between compute and memory through a crossbar array architecture (Haensch et al, 2022; W. Wan et al, 2022).…”
Section: Memory
confidence: 99%
“…Among the various non-von Neumann/neuromorphic computing schemes recently proposed and implemented, crossbar arrays based on NVM devices as synapses can be deemed the most popular. These crossbar arrays carry out analog computing (both inference and training of neural networks (NN)) in analog memory systems (in-memory computing) very efficiently, both in terms of speed and energy [1][2][3][4][5]. During forward computation/inference, the crossbar array carries out vector-matrix multiplication (VMM) between the input vector and the synaptic weight matrix very fast by making use of Ohm's Law and Kirchhoff's Current Law.…”
Section: Introduction (Motivation)
confidence: 99%
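The analog VMM described in the citation statements above can be sketched numerically. This is a minimal idealized model, not any specific hardware: the conductance values and voltages below are illustrative assumptions, and real crossbars add non-idealities (wire resistance, device variation, read noise) that this sketch omits.

```python
import numpy as np

# Hypothetical conductance matrix (siemens): each cell holds an NVM element's
# programmed conductance. Rows = word lines (inputs), columns = bit lines.
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],
              [3.0e-6, 1.5e-6, 2.5e-6]])

# Input voltages applied to the word lines (volts).
V = np.array([0.2, 0.4])

# Ohm's law gives each cell current I_ij = V_i * G_ij; Kirchhoff's current
# law sums the currents along each bit line. The summed bit-line currents
# are exactly the vector-matrix product V @ G, computed in one analog step.
I = V @ G  # one output current per bit line (column)
```

In hardware, the multiply and the accumulate both happen in the physics of the array, which is why the data never has to shuttle to a separate compute unit.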
“…Interestingly, Mou et al recently showed that the topotactic transition in SrCoOx could be highly beneficial for memristive applications, since the resistive switching process is likely to be better controlled than in most filamentary systems, [18] and other reports have similarly demonstrated interest in this and related systems for memristive applications. [20,23,24] Here, the ordered vacancy channels provide a pre-defined path for oxygen migration without structural breakdown. From a thermodynamic perspective, the energy barrier for switching between the two phases is relatively small (smaller than those of similar systems such as strontium manganite, SrMnO2.5 and SrMnO3−δ [11]), and the application of a bias to a BM-SCO/PV-SCO heterostructure could be used to change the overall resistance of a stack, paving the way for controllable and reversible synaptic memories.…”
Section: Introduction
confidence: 99%