2021
DOI: 10.48550/arxiv.2106.06044
Preprint

Convergence and Alignment of Gradient Descent with Random Backpropagation Weights

Abstract: Stochastic gradient descent with backpropagation is the workhorse of artificial neural networks. It has long been recognized that backpropagation fails to be a biologically plausible algorithm. Fundamentally, it is a non-local procedure: updating one neuron's synaptic weights requires knowledge of the synaptic weights or receptive fields of downstream neurons. This limits the use of artificial neural networks as a tool for understanding the biological principles of information processing in the brain. Lillicrap et al. …
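For orientation, the mechanism the abstract refers to can be illustrated with a short sketch of feedback alignment: instead of the transposed forward weights that backpropagation would use in the backward pass, the error is propagated through a fixed random feedback matrix, so no neuron needs knowledge of downstream synaptic weights. The sketch below is not code from the paper; the one-hidden-layer architecture, dimensions, learning rate, and squared-error loss are illustrative assumptions.

import numpy as np

# Minimal feedback-alignment sketch for a one-hidden-layer ReLU network
# (illustrative assumptions: dimensions, learning rate, squared-error loss).
rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n = 20, 100, 1, 200

X = rng.standard_normal((n, d_in))
y = rng.standard_normal((n, d_out))

W1 = rng.standard_normal((d_in, d_hidden)) / np.sqrt(d_in)
W2 = rng.standard_normal((d_hidden, d_out)) / np.sqrt(d_hidden)
B = rng.standard_normal((d_out, d_hidden)) / np.sqrt(d_hidden)  # fixed random feedback weights

lr = 1e-2
for step in range(1000):
    H = np.maximum(X @ W1, 0.0)          # forward pass, hidden layer
    pred = H @ W2                        # forward pass, output layer
    err = pred - y                       # gradient of 0.5 * squared error w.r.t. pred

    # Backward pass: feedback alignment propagates the error through B,
    # whereas exact backpropagation would use W2.T here.
    delta_hidden = (err @ B) * (H > 0)

    W2 -= lr * (H.T @ err) / n
    W1 -= lr * (X.T @ delta_hidden) / n

    if step % 200 == 0:
        loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
        print(f"step {step}: loss {loss:.4f}")

The "alignment" in the title refers to the observation that, as training proceeds, the forward weights tend to become correlated with the fixed feedback matrix, so the random-feedback update increasingly points in the direction of the true gradient, which is why learning with random backward weights can still converge.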

Cited by 3 publications (4 citation statements)
References 15 publications

“…A natural next step is to extend our understanding of alignment dynamics to the problem of learning neural networks. Song et al. (2021) show that alignment may not happen in highly overparameterized neural networks. But it is a robust phenomenon in the parameter regimes typically encountered in practice, and therefore important to understand.…”
Section: Discussion
confidence: 96%
“…In recent work, Song et al. (2021) study feedback alignment for highly overparameterized one-hidden-layer neural networks, where the width of the hidden layer is much larger than the size of the training set. This work builds on past work on Neural Tangent Kernels (Jacot et al., 2018), and shows that feedback alignment converges to a solution with zero training error.…”
Section: Related Work
confidence: 99%
“…Understanding and designing efficient algorithms for risk-sensitive RL in other settings, such as shortest path problems (Min et al., 2021b), off-policy evaluation (Min et al., 2021c) and offline learning (Chen et al., 2021b), may also be of great interest. Moreover, as risk-sensitive RL is closely related to human learning and behaviors, it would be intriguing to study how it synthesizes with relevant areas such as meta learning and bio-inspired learning (Song et al., 2021; Xu et al., 2021). Last but not least, exploring how risk sensitivity could be used to augment unsupervised learning algorithms (Fei & Chen, 2018a,b, 2020; Ling et al., 2019) would be an important future topic as well.…”
Section: Lemma C4 Under the Setup Of Lemma C3 We Have
confidence: 99%
“…of meta learning [44], biologically inspired deep learning [42] and deep reinforcement learning [29]. It would be an exciting research direction to establish connections between these related areas through rigorous and theoretical analysis of deep learning [9,11].…”
Section: C1 Auxiliary Lemmas
confidence: 99%