2018
DOI: 10.1016/j.artint.2018.03.003

Learning in the machine: Random backpropagation and the deep learning channel

Abstract: Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks in which the transposes of the forward matrices are replaced by fixed random matrices when calculating the weight updates. It is remarkable both for its effectiveness, despite using random matrices to communicate error information, and because it entirely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we…
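To make the abstract's description concrete, the following is a minimal NumPy sketch of the idea, not the paper's implementation: a one-hidden-layer network whose backward pass uses a fixed random matrix B in place of the transpose W2.T that standard backpropagation would use. All names, sizes, and the squared-error loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer network: h = tanh(W1 @ x), y = W2 @ h (linear readout).
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))

# RBP's defining change: a FIXED random matrix B replaces W2.T in the
# backward pass. B is drawn once and never updated during training.
B = rng.normal(0.0, 0.1, (n_hid, n_out))

lr = 0.05
x = rng.normal(size=n_in)
target = rng.normal(size=n_out)

for step in range(100):
    # Forward pass.
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                     # gradient of squared error w.r.t. y

    # Backward pass: standard BP would compute (W2.T @ e) * tanh'(a1);
    # random backpropagation uses the fixed matrix B instead.
    delta_h = (B @ e) * (1.0 - h**2)   # tanh'(a1) = 1 - tanh(a1)^2

    # Updates stay local: outer products of errors and presynaptic activity.
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

print("final loss:", float(np.sum((W2 @ np.tanh(W1 @ x) - target) ** 2)))
```

The point of the sketch is the single changed line: the error signal reaching the hidden layer travels through B rather than through the transpose of the forward weights, which is what removes the weight-symmetry requirement mentioned in the abstract.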

Cited by 63 publications (101 citation statements: 3 supporting, 98 mentioning, 0 contrasting). References 26 publications.

Citation statements:
“…More specifically, (Baldi et al, 2016) and (Lee et al, 2016) list several reasons why the requirements of gradient BP make it biologically implausible. The essence of these difficulties is that gradient BP is non-local in space and in time when implemented on a neural substrate, and requires precise linear and non-linear computations.…”
Section: Discussion
“…Extended simulations suggest that random BP performance at 10-bit precision is indistinguishable from that with unquantized weights (Baldi et al, 2016), but whether this also holds for online learning has not yet been tested. Here, we hypothesize that 8-bit synaptic weights are a good trade-off between the ability to learn with high accuracy and the cost of implementation in future hardware.…”
Section: eRBP Can Learn with Low-Precision Fixed-Point Representations
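The bit-precision trade-off in this excerpt can be illustrated with a simple symmetric fixed-point quantizer. This is a generic sketch, not the eRBP hardware model; the weight range w_max and the round-then-saturate scheme are assumptions.

```python
import numpy as np

def quantize_fixed_point(w, n_bits=8, w_max=1.0):
    """Round weights to n-bit signed fixed point on [-w_max, w_max).

    The step size is w_max / 2**(n_bits - 1); values outside the
    representable range saturate, mimicking bounded synaptic weights
    in digital neuromorphic hardware.
    """
    step = w_max / 2 ** (n_bits - 1)
    q = np.round(w / step) * step
    return np.clip(q, -w_max, w_max - step)

w = np.random.default_rng(1).normal(0.0, 0.3, 5)
print(w)
print(quantize_fixed_point(w, n_bits=8))   # 8-bit:  256 levels
print(quantize_fixed_point(w, n_bits=10))  # 10-bit: 1024 levels
```

Applied to the weight matrices after each update in a sketch like the one above, this makes the excerpt's comparison concrete: 10-bit weights give four times as many representable levels as 8-bit weights, at a correspondingly higher storage and hardware cost per synapse.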