2020
DOI: 10.1016/j.neucom.2020.04.130
SSM: a high-performance scheme for in situ training of imprecise memristor neural networks

Cited by 17 publications (12 citation statements)
References 23 publications
“…in situ (re)training of weights (or just a subset of them [ 14 ] ) to recover from the effects of nonidealities, [ 15 , 16 , 17 , 18 , 19 ] including in convolutional neural networks (CNNs), [ 14 , 20 ] recurrent structures, [ 20 ] and in neural networks used for reinforcement learning [ 21 ]…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…But, even with technological maturity, other imperfections may remain due to their inseparable relationship with the physics of the device/circuit (e.g., programming variability and interconnect resistance). Numerous methods have therefore been proposed to mitigate the effect of these non-idealities for NN applications (Lim et al., 2018; He et al., 2019; Liu et al., 2020a; Wang et al., 2020b; Mahmoodi et al., 2020; Pan et al., 2020; Zhang et al., 2020; Xi et al., 2021). Although a mitigation approach can significantly increase the performance of RSM-based NNs, none of these methods are able to achieve the accuracy of their software counterparts.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…For the tolerance of soft faults, the in situ training which can adjust the weights self‐adaptively on chip is an effective approach. [ 21 ] The fault tolerance can be further improved using some modified weight update rules, such as the Manhattan update rule, [ 22,23 ] stochastic update rule, [ 6,24 ] stochastic sparse update with momentum adaption, [ 25 ] and sign‐based backpropagation (BP) algorithm. [ 26,27 ] The main idea of these modified weight update rules is to move the weights in the general direction that results in the reduction of loss function without regulating the step size.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
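The last excerpt describes sign-based weight update rules that move each weight by a fixed step in the loss-reducing direction, without regulating the step size. A minimal sketch of such a Manhattan-style update, written here as an illustration (the function name and step size are assumptions, not from the cited papers):

```python
import numpy as np

def manhattan_update(weights, gradients, step=0.01):
    """Sign-based (Manhattan-style) update: move each weight by a
    fixed step opposite the sign of its gradient, ignoring gradient
    magnitude. Hardware-friendly for memristor crossbars, since every
    conductance can then be nudged by identical programming pulses."""
    return weights - step * np.sign(gradients)

# Example with a quadratic loss L(w) = ||w||^2 / 2, whose gradient is w:
w = np.array([0.5, -0.3, 0.0])
g = w.copy()
w_new = manhattan_update(w, g, step=0.01)
print(w_new)  # each nonzero weight moves 0.01 toward zero; zeros stay put
```

Because only the gradient's sign is used, imprecise analog gradient readouts degrade this rule far less than a magnitude-dependent update, which is the fault-tolerance argument the excerpt makes.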