2021
DOI: 10.3389/fnins.2021.749811
Gradient Decomposition Methods for Training Neural Networks With Non-ideal Synaptic Devices

Abstract: While promising for high-capacity machine learning accelerators, memristor devices have non-idealities that prevent software-equivalent accuracies when used for online training. This work uses a combination of Mini-Batch Gradient Descent (MBGD) to average gradients, stochastic rounding to avoid vanishing weight updates, and decomposition methods to keep the memory overhead low during mini-batch training. Since the weight update has to be transferred to the memristor matrices efficiently, we also investigate th…
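The abstract combines three ingredients: mini-batch gradient averaging, stochastic rounding of the weight updates, and a decomposition of the accumulated gradient to limit memory overhead. As a rough illustration of the memory argument only (not the paper's actual algorithm), the sketch below accumulates a mini-batch gradient for one fully connected layer and compares the storage cost of the full gradient matrix with a truncated rank-k factorization; the layer sizes, the rank k, and all variable names are assumptions made for this example.

```python
# Hedged sketch: mini-batch gradient accumulation for one fully connected
# layer, stored either as a full matrix or as a truncated rank-k
# factorization. Sizes, rank, and names are illustrative assumptions,
# not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch, k = 512, 256, 64, 4     # assumed layer / batch / rank sizes

# Per-sample activations x_i (n_in) and error signals d_i (n_out);
# the per-sample gradient is the outer product d_i x_i^T.
X = rng.standard_normal((batch, n_in))
D = rng.standard_normal((batch, n_out))

# (a) Full accumulation: the buffer costs n_out * n_in values.
G_full = D.T @ X / batch                     # averaged mini-batch gradient

# (b) Low-rank accumulation: keep only the top-k singular triplets,
# so the buffer costs k * (n_out + n_in) values instead.
U, s, Vt = np.linalg.svd(G_full, full_matrices=False)
G_lowrank = (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k approximation

rel_err = np.linalg.norm(G_full - G_lowrank) / np.linalg.norm(G_full)
print(f"rank-{k} relative error: {rel_err:.3f}, "
      f"buffer size: {k * (n_out + n_in)} vs {n_out * n_in} values")
```

A rank-k buffer is also convenient for crossbar hardware, since each rank-1 term can presumably be transferred as a single outer-product programming step; this relates to the update-transfer question raised in the truncated final sentence of the abstract.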

Cited by 4 publications (1 citation statement)
References 54 publications
“…Stochastic rounding can be particularly useful in deep network training with low-bit-precision arithmetic [31, 32]. A real template value a which lies between a lower weight level (A1) and an upper weight level (A2) was stochastically rounded up to A2 with probability (a − A1)/(A2 − A1) and down to A1 with probability (A2 − a)/(A2 − A1). The algorithm details are included in the supplemental materials.…”
Section: Methods
Confidence: 99%
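The quoted rule is the standard unbiased stochastic rounding scheme: the expected value of the rounded result equals the original value a. Below is a minimal sketch of that rule, assuming a uniform grid of weight levels; the grid, the seed, and the function name are illustrative and not taken from the cited paper.

```python
# Hedged sketch of the stochastic rounding rule described above:
# a value a between levels A1 and A2 rounds up with probability
# (a - A1) / (A2 - A1) and down otherwise, so E[round(a)] = a.
import numpy as np

def stochastic_round(a, levels, rng):
    """Round a onto the sorted 1-D array `levels` stochastically."""
    a = np.clip(a, levels[0], levels[-1])
    idx = np.searchsorted(levels, a, side="right") - 1
    idx = np.clip(idx, 0, len(levels) - 2)
    A1, A2 = levels[idx], levels[idx + 1]
    p_up = (a - A1) / (A2 - A1)              # probability of rounding up
    return np.where(rng.random(np.shape(a)) < p_up, A2, A1)

rng = np.random.default_rng(0)
levels = np.linspace(-1.0, 1.0, 17)          # assumed 17 uniform weight levels
samples = stochastic_round(np.full(100_000, 0.30), levels, rng)
print(samples.mean())                        # ~0.30: the rounding is unbiased
```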