2017 IEEE 6th Non-Volatile Memory Systems and Applications Symposium (NVMSA)
DOI: 10.1109/nvmsa.2017.8064465
A quantization-aware regularized learning method in multilevel memristor-based neuromorphic computing system

Cited by 29 publications (16 citation statements). References 13 publications.
“…In an accuracy-sensitive scenario, a rate loss as large as 2.9% can be considered significant. However, some neural network applications, such as Quantized Neural Networks (QNN), may prioritize simple and fast computation over recognition performance [32,33,34,35]. For edge-computing applications in particular, power and time must be seriously considered.…”
Section: Results
Citation type: mentioning
confidence: 99%
“…One more point to note is that the rate loss due to the partial gating scheme is comparable to that of state-of-the-art CMOS-implemented QNNs, which were developed to replace high-precision computation with low-precision computation [32]. Low-precision multiplication can yield a 4~6X speed-up at the cost of roughly a 1–3% loss in recognition performance [32,33,34]. This ~1–3% loss is comparable to the rate loss due to the partial gating scheme, as shown in Figure 7 and Figure 8.…”
Section: Discussion
Citation type: mentioning
confidence: 99%
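
To make the low-precision computation referenced above concrete, here is a minimal sketch of uniform fixed-point quantization of the kind QNNs apply to weights and activations. This is a generic, hypothetical illustration rather than the method of any cited paper, and the 4~6X speed-ups come from dedicated low-precision hardware, not from this Python code.

```python
import numpy as np

def quantize_uniform(x, bits=4):
    """Hypothetical sketch: uniformly quantize x to 2**bits levels
    spanning its own value range, then dequantize back to float."""
    lo, hi = float(x.min()), float(x.max())
    n_levels = 2 ** bits - 1
    scale = (hi - lo) / n_levels if hi > lo else 1.0
    q = np.round((x - lo) / scale)           # integer codes in [0, n_levels]
    return q * scale + lo                    # low-precision approximation

w = np.random.randn(4, 4).astype(np.float32)    # toy weight matrix
x = np.random.randn(4).astype(np.float32)       # toy input vector
exact = w @ x
approx = quantize_uniform(w) @ quantize_uniform(x)
print(np.abs(exact - approx).max())             # small quantization error
```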
“…Besides device engineering optimization, algorithm-level techniques are also helpful for relaxing the requirement on resistance-state resolution. By modifying the regularization used during training, the weight distribution can be tuned to fit the available resistance values [93]. In this way, algorithm-level weights are tailored to enhance the accuracy of eNVM-based NCS.…”
Section: Precision
Citation type: mentioning
confidence: 99%
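
The quoted idea of tuning the weight distribution through regularization can be sketched briefly. The snippet below adds a penalty that pulls each weight toward its nearest device-realizable level; the level set, the quadratic penalty form, and the strength `lam` are illustrative assumptions, not the exact formulation of [93] or of the cited paper.

```python
import numpy as np

def quantization_penalty(weights, levels, lam=1e-3):
    """Hypothetical quantization-aware regularizer: penalize each
    weight's distance to the nearest level the device can store."""
    # Distance from every weight to every allowed level
    dist = np.abs(weights[:, None] - levels[None, :])
    nearest = levels[np.argmin(dist, axis=1)]
    penalty = lam * np.sum((weights - nearest) ** 2)
    # Gradient w.r.t. weights, treating the nearest level as constant
    grad = 2.0 * lam * (weights - nearest)
    return penalty, grad

levels = np.linspace(-1.0, 1.0, 4)            # e.g. 2-bit multilevel cell
w = np.random.uniform(-1.0, 1.0, size=8)      # toy weights
loss, g = quantization_penalty(w, levels)
w -= 0.5 * g                                  # one step toward the levels
```

Added to the task loss during training, such a term nudges weights toward values the multilevel memristor can actually represent, so mapping the trained weights onto discrete resistance states costs less accuracy.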
“…[14,15,16]. Memristors, demonstrated experimentally in 2008 [17], are nonvolatile memories in which both binary and multi-level values can be stored [18,19,20]. Architecturally, memristor crossbars can be built into a 3-dimensional multi-layer structure that closely resembles the biological neuronal structure observed in the human brain [21,22,23,24].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
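
As a rough illustration of why crossbars map naturally onto neural-network layers, the sketch below computes the ideal analog vector-matrix product of a crossbar: input voltages drive the rows, cell conductances act as weights, and each column current is their weighted sum (Ohm's law per cell, Kirchhoff's current law per column). Non-idealities such as wire resistance and sneak currents are ignored, and all numbers are hypothetical.

```python
import numpy as np

def crossbar_vmm(voltages, conductances):
    """Ideal crossbar vector-matrix multiply:
    I_j = sum_i V_i * G[i, j] (column/bit-line currents)."""
    return voltages @ conductances

levels = np.linspace(1e-6, 1e-4, 4)           # 4 conductance levels (S)
G = np.random.choice(levels, size=(3, 2))     # 3x2 crossbar of 2-bit cells
V = np.array([0.1, 0.2, 0.0])                 # row voltages (V)
print(crossbar_vmm(V, G))                     # column currents (A)
```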