2023
DOI: 10.1002/adma.202305465
Bulk‐Switching Memristor‐Based Compute‐In‐Memory Module for Deep Neural Network Training

Yuting Wu,
Qiwen Wang,
Ziyu Wang
et al.

Abstract: The need for deep neural network (DNN) models with higher performance and better functionality leads to the proliferation of very large models. Model training, however, requires intensive computation time and energy. Memristor‐based compute‐in‐memory (CIM) modules can perform vector‐matrix multiplication (VMM) in situ and in parallel, and have shown great promise in DNN inference applications. However, CIM‐based model training faces challenges due to non‐linear weight updates, device variations, and low‐precision…
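The VMM primitive the abstract refers to maps a weight matrix onto crossbar conductances and reads out column currents. The NumPy model below is a minimal illustrative sketch, not the paper's implementation: signed weights are encoded as differential conductance pairs, inputs are applied as row voltages, and the conductance window `G_MIN`/`G_MAX` and the Gaussian device-variation term `sigma` are assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed conductance window for a single memristor cell (arbitrary units).
G_MIN, G_MAX = 0.1, 1.0

def map_weights(W):
    """Encode signed weights as differential conductance pairs (G+, G-)."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    g_pos = G_MIN + scale * np.clip(W, 0, None)   # positive part of each weight
    g_neg = G_MIN + scale * np.clip(-W, 0, None)  # negative part of each weight
    return g_pos, g_neg, scale

def crossbar_vmm(v_in, g_pos, g_neg, scale, sigma=0.02):
    """One analog VMM: column current = sum over rows of G * V (Kirchhoff).

    `sigma` models device-to-device conductance variation (an assumption).
    """
    noise = lambda g: g * (1 + sigma * rng.standard_normal(g.shape))
    i_pos = v_in @ noise(g_pos)  # currents summed along each column
    i_neg = v_in @ noise(g_neg)
    return (i_pos - i_neg) / scale  # differential readout cancels the G_MIN offset

W = rng.standard_normal((4, 3))  # 4x3 weight matrix -> one crossbar pair
x = rng.standard_normal(4)       # input vector applied as row voltages
g_pos, g_neg, scale = map_weights(W)
print("analog:", crossbar_vmm(x, g_pos, g_neg, scale))
print("ideal :", x @ W)
```

Because both conductance arrays share the same `G_MIN` baseline, the differential readout recovers `x @ W` exactly in the noise-free case; the variation term shows why training through such hardware is harder than inference.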

Cited by 13 publications (8 citation statements) | References 39 publications
“…Wu et al. established a system comprising four tiles of RRAM crossbar arrays, which serve as CIM tiles, together with the necessary peripheral circuits. [101] As shown in Figure 5a, the AXI bus incorporates the four separate CIM tiles as self-contained IP blocks. Furthermore, an on-chip RISC-V core functions as the central controller for programming and computing tasks pertaining to the CIM tiles.…”
Section: Network on Chip
confidence: 99%
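The tile-plus-controller organization described in this statement can be mimicked in software to show the control flow. The toy model below is a hypothetical sketch, not the chip's firmware: the `Tile` and `Controller` classes and the row-wise matrix split are assumptions made for illustration. A controller partitions a weight matrix across four self-contained tiles, programs each one, and merges their partial VMM results.

```python
import numpy as np

class Tile:
    """One self-contained CIM tile holding a programmed weight sub-matrix."""
    def __init__(self, rows, cols):
        self.g = np.zeros((rows, cols))

    def program(self, w):   # analogous to write commands sent over the bus
        self.g[:w.shape[0], :w.shape[1]] = w

    def compute(self, v):   # analogous to an in-tile analog VMM
        return v @ self.g

class Controller:
    """Software stand-in for the on-chip RISC-V core coordinating the tiles."""
    def __init__(self, n_tiles=4, rows=64, cols=64):
        self.tiles = [Tile(rows, cols) for _ in range(n_tiles)]
        self.rows = rows

    def program_matrix(self, W):
        # Split W row-wise into per-tile blocks (a simplified mapping).
        self.blocks = np.array_split(W, len(self.tiles), axis=0)
        for tile, block in zip(self.tiles, self.blocks):
            tile.program(block)

    def vmm(self, x):
        # Each tile computes a partial product; the controller accumulates.
        out = np.zeros(self.tiles[0].g.shape[1])
        start = 0
        for tile, block in zip(self.tiles, self.blocks):
            n = block.shape[0]
            v = np.zeros(self.rows)
            v[:n] = x[start:start + n]
            out += tile.compute(v)
            start += n
        return out

ctrl = Controller()
W = np.random.randn(200, 64)   # larger than any single tile
x = np.random.randn(200)
ctrl.program_matrix(W)
print(np.allclose(ctrl.vmm(x), x @ W))   # True: tiles reproduce the full VMM
```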
“…Furthermore, an on-chip RISC-V core functions as the central controller for programming and computing tasks pertaining to the CIM tiles. This modular design allows for scalability by integrating… [101]”
[Figure caption fragment: Copyright 2023, Wiley-VCH. b) The architecture of the tiled memristor chip with the STELLAR scheme for efficient improvement learning.]
Section: Network on Chip
confidence: 99%
“…To balance accuracy and inference efficiency, the weights and input activations are typically quantized to 8 bits, which has been shown to cause minor accuracy losses, especially when coupled with quantization‐aware training techniques. [24] Since a single RRAM cell considered here only offers 4‐bit storage (Figure 2c), two cells are used to store one weight value. Figure 2f illustrates the mapping scheme for a convolution layer.…”
Section: CIM System for DNN Inference Acceleration
confidence: 99%
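The two-cells-per-weight scheme can be made concrete: an 8-bit quantized weight is split into a high and a low 4-bit slice, one per RRAM cell, and the two column sums are recombined with a weight of 16 on the high slice. The sketch below assumes symmetric 8-bit quantization and an unsigned-offset encoding, both illustrative choices rather than the paper's exact mapping, and verifies that the recombination reproduces the integer VMM.

```python
import numpy as np

def quantize_8bit(w, w_max):
    """Symmetric 8-bit quantization to integers in [-128, 127] (assumption)."""
    return np.clip(np.round(w / w_max * 127), -128, 127).astype(np.int32)

def split_to_4bit_cells(q):
    """Split each 8-bit integer into two unsigned 4-bit slices.

    An offset of 128 makes the value unsigned before slicing, so each
    slice fits within the 16 conductance levels of one 4-bit RRAM cell.
    """
    u = q + 128               # unsigned range 0..255
    return u // 16, u % 16    # (high cell, low cell) per weight

def vmm_two_cell(x, hi, lo):
    """Recombine column sums: output = x @ (16*hi + lo - 128)."""
    return 16 * (x @ hi) + (x @ lo) - 128 * x.sum()

w = np.random.randn(8, 4)
x = np.random.randn(8)
q = quantize_8bit(w, np.abs(w).max())
hi, lo = split_to_4bit_cells(q)
print(np.allclose(vmm_two_cell(x, hi, lo), x @ q))   # True
```

The 128·Σx correction for the unsigned offset is a single digital subtraction per output, which is why offset encodings of this kind are common when cell conductances cannot represent negative values directly.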