2020
DOI: 10.48550/arxiv.2006.12878
Preprint
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures

Abstract: Despite being the workhorse of deep learning, the backpropagation algorithm is no panacea. It enforces sequential layer updates, thus preventing efficient parallelization of the training process. Furthermore, its biological plausibility is being challenged. Alternative schemes have been devised; yet, under the constraint of synaptic asymmetry, none have scaled to modern deep learning tasks and architectures. Here, we challenge this perspective, and study the applicability of Direct Feedback Alignment to neural…

Cited by 3 publications (3 citation statements) · References: 34 publications
“…The results presented here suggest that our framework for artificial neural networks has the potential to be used as a computational tool for better understanding the role of balanced excitation and inhibition in neuronal organization and plasticity, and as a biologically inspired building block for more complex neural network models. This approach is in line with a growing body of work on biologically constrained computational models for machine learning and neuroscience (Akrout et al, 2019;Bellec et al, 2019;Frenkel et al, 2021;Launay et al, 2020;Lillicrap et al, 2016;Nøkland, 2016;Song et al, 2021;Tanaka et al, 2019;Zhou et al, 2021).…”
Section: Introduction (supporting)
confidence: 62%
“…DFA is a biologically inspired alternative to backpropagation with an asymmetric backward pass. For ease of notation, we introduce it for fully connected networks but it generalizes to convolutional networks, transformers and other architectures [16]. It has been theoretically studied in [20,26].…”
Section: Learning With Direct Feedback Alignment (DFA) (mentioning)
confidence: 99%
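The statement above describes DFA's key property: each hidden layer receives the output error through its own fixed random feedback matrix, rather than through the transposes of the forward weights. A minimal NumPy sketch of this idea for a small fully connected network follows; the layer sizes, tanh activations, squared loss, and learning rate are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network (illustrative layer sizes).
sizes = [4, 8, 8, 3]

# Trainable forward weights, one matrix per layer.
W = [rng.normal(0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
# Fixed random feedback matrices: one per hidden layer, projecting the
# output error DIRECTLY back to that layer (no transposed forward
# weights, no sequential layer-to-layer backward pass).
B = [rng.normal(0, 0.5, (m, sizes[-1])) for m in sizes[1:-1]]

def tanh_grad(h):
    # Derivative of tanh expressed via its output: 1 - tanh(a)^2.
    return 1.0 - h ** 2

def dfa_step(x, y, lr=0.1):
    # Forward pass: tanh hidden layers, linear output.
    hs = [x]
    for Wi in W[:-1]:
        hs.append(np.tanh(Wi @ hs[-1]))
    out = W[-1] @ hs[-1]
    e = out - y  # output error for squared loss

    # DFA backward pass: every hidden layer gets the same output error
    # through its own fixed random matrix, so all updates can in
    # principle be computed in parallel.
    for i, Bi in enumerate(B):
        delta = (Bi @ e) * tanh_grad(hs[i + 1])
        W[i] -= lr * np.outer(delta, hs[i])
    # The output layer uses the true local gradient.
    W[-1] -= lr * np.outer(e, hs[-1])
    return float(0.5 * e @ e)

x = rng.normal(size=4)
y = np.array([1.0, 0.0, -1.0])
losses = [dfa_step(x, y) for _ in range(200)]
```

On this toy single-sample problem the loss decreases over the updates, illustrating the point made in the quoted statement: a fixed random feedback path can support learning without symmetric weights.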
“…These studies highlighted that a fixed random matrix can indeed support learning in neural networks, indicating that symmetric weight matrices in backpropagation might not be essential. However, follow-up studies (Bartunov et al 2018;Launay et al 2020) have suggested that, while this approach works for simpler tasks, it struggles when applied to more complex ones. Target propagation (TP) (LeCun 1986;Le Cun, Galland, and Hinton 1988;Bengio 2014) is a biologically more plausible feedback algorithm than the error backpropagation which updates network parameters with layer-wise local loss.…”
Section: Introduction (mentioning)
confidence: 99%