2021
DOI: 10.1109/tcad.2020.2989373
ITT-RNA: Imperfection Tolerable Training for RRAM-Crossbar-Based Deep Neural-Network Accelerator

Cited by 20 publications (7 citation statements)
References 35 publications
“…[25,26] Additionally, process variation and stuck-at fault errors have been widely reported to cause performance degradation, although they can be partially compensated by various methods at extra cost. [27][28][29][30][31][32][33][34][35][36][37][38][39] These issues can generally be attributed to the non-biological conventional learning algorithm of DNNs, that is, the error-backpropagation-based gradient-descent weight update, [40,41] which requires both the VMMs and the conductance tuning to be performed at high precision. [42][43][44] Novel neural-network structures and learning algorithms need to be explored to address these issues.…”
Section: DOI: 10.1002/aisy.202100249
confidence: 99%
“…Some efficient methods have recently been proposed to detect the types and locations of faults in memristor crossbars, but they mainly focus on hard faults, leaving soft faults unaddressed. [20,34,35] Song et al. [19] proposed to detect the conductance variation (i.e., soft fault) of every memristor and record it, together with the location of that memristor, in a buffer. This bit-wise detection method is relatively slow, and storing every memristor's conductance-variation value is redundant.…”
Section: Multifault Detection
confidence: 99%
“…Liu et al [ 16 ] applied a fault‐aware retraining scheme to SLP on the MNIST dataset (note: the MNIST dataset is always used hereafter unless otherwise specified), recovering the accuracy to 98.1% of the ideal value in the presence of 20% hard faults. Chen and Song et al [ 15,19 ] used a bipartile‐matching algorithm to map significant weights to the fault‐free memristors. They further reduced the large weights mapped onto the faulty memristors and then performed retraining.…”
Section: Preliminariesmentioning
confidence: 99%