Proceedings of the 56th Annual Design Automation Conference (DAC 2019)
DOI: 10.1145/3316781.3317739
A Configurable Multi-Precision CNN Computing Framework Based on Single Bit RRAM

Cited by 68 publications (39 citation statements)
References 12 publications
“…Power-efficiency of state-of-the-art CNNs generally improves via architectural-level techniques, such as quantization [137] and pruning [67]. These techniques do not significantly compromise CNN accuracy as they exploit the sparse nature of CNN applications [80,3,134].…”
Section: Introduction
confidence: 99%
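The quantization technique this excerpt refers to is easiest to see in code. Below is a minimal numpy sketch of symmetric uniform quantization; the function name, per-tensor scaling, and 8-bit default are illustrative assumptions, not details taken from [137].

```python
import numpy as np

def quantize_uniform(w, n_bits=8):
    """Symmetric uniform quantization to signed n_bits integers (a sketch)."""
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale                         # dequantize as q * scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_uniform(w)
print("max dequantization error:", np.max(np.abs(w - q * scale)))
```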
“…These architectural techniques are applicable to any underlying hardware. There are numerous extensions of quantization [136,137] and pruning [129,36] techniques. In our experiments, we integrate typical quantization [34] and pruning [33] techniques with our proposed hardware-level undervolting technique to further improve the power-efficiency of FPGA-based CNN accelerators.…”
Section: Introduction
confidence: 99%
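For the pruning side of the integration the excerpt describes, a minimal sketch of global magnitude pruning follows; this is an assumed variant chosen for illustration, and the technique cited as [33] may differ in detail.

```python
import numpy as np

def prune_magnitude(w, sparsity=0.5):
    """Zero out (approximately) the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)                      # number of weights to drop
    thresh = np.partition(np.abs(w).ravel(), k)[k]  # k-th smallest magnitude
    mask = np.abs(w) >= thresh                      # keep only large weights
    return w * mask, mask

w = np.random.randn(64, 64).astype(np.float32)
w_pruned, mask = prune_magnitude(w, sparsity=0.9)
print("achieved sparsity:", 1.0 - mask.mean())
```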
“…The study in [58] introduces an integrated CNN accelerator design with a dynamic fixed-point quantization strategy to minimize the computational loss while saving hardware resources and memory bandwidth. Another work in [59] proposes a CNN hardware design which supports configurable multi-precision computation using single bit RRAM. In this design, each layer is computed using a different number of bits, which can significantly reduce energy consumption.…”
Section: E. Related Work
confidence: 99%
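The per-layer configurable precision attributed to [59] maps naturally onto a bit-serial scheme: an n-bit weight matrix is split into n binary planes, each of which can occupy an array of single-bit cells, and the partial matrix-vector products are recombined with shifted adds. The numpy sketch below is a software model of that decomposition, not the paper's hardware design; n_bits can be chosen differently per layer.

```python
import numpy as np

def mvm_bit_serial(w_int, x, n_bits):
    """MVM with weights split into single-bit planes (two's complement):
    plane b is weighted by 2^b, except the MSB plane, which carries -2^(n-1)."""
    w_u = w_int.astype(np.int64) & ((1 << n_bits) - 1)  # two's-complement bits
    y = np.zeros(w_int.shape[0])
    for b in range(n_bits):
        plane = (w_u >> b) & 1                  # one binary plane = 1-bit cells
        weight = -(1 << b) if b == n_bits - 1 else (1 << b)
        y = y + weight * (plane @ x)            # shifted add of partial MVMs
    return y

rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=(4, 16))           # 4-bit signed weights
x = rng.standard_normal(16)
print(np.allclose(mvm_bit_serial(w, x, n_bits=4), w @ x))   # True
```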
“…Note that the 2N_c cells associated with each weight parameter w_{i,j} can be stored across multiple banks [21].…”
Section: MVM via a Resistive Crossbar
confidence: 99%
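One plausible reading of the 2N_c figure is a differential pair per bit: N_c single-bit cells encode the positive part of a weight and N_c the negative part, so the planes recombine additively and can be spread across banks independently. The encoding below is an assumption made for illustration and is not necessarily the scheme used in [21].

```python
import numpy as np

def bit_planes(v, n_c):
    """Split non-negative integers into n_c single-bit planes (LSB first)."""
    return [(v >> b) & 1 for b in range(n_c)]

def encode_differential(w_int, n_c):
    """Map each signed weight onto 2*n_c single-bit cells:
    n_c cells for max(w, 0) and n_c cells for max(-w, 0)."""
    pos = np.maximum(w_int, 0)
    neg = np.maximum(-w_int, 0)
    return bit_planes(pos, n_c), bit_planes(neg, n_c)

def decode_differential(pos_planes, neg_planes):
    """Recombine the planes; each pair contributes 2^b * (p - n)."""
    return sum((1 << b) * (p - n)
               for b, (p, n) in enumerate(zip(pos_planes, neg_planes)))

w = np.array([[3, -5], [-2, 7]])                 # fits in n_c = 3 magnitude bits
p, n = encode_differential(w, n_c=3)
print(np.array_equal(decode_differential(p, n), w))   # True
```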