2019
DOI: 10.3389/fnins.2019.00525

Direct Feedback Alignment With Sparse Connections for Local Learning

Abstract: Recent advances in deep neural networks (DNNs) owe their success to training algorithms that use backpropagation and gradient-descent. Backpropagation, while highly effective on von Neumann architectures, becomes inefficient when scaling to large networks. In what is commonly referred to as the weight transport problem, each neuron's dependence on the weights and errors located deeper in the network requires exhaustive data movement, which presents a key problem in enhancing the performance and energy-efficiency of machine…
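The abstract stops short of describing the paper's mechanism, so a brief sketch may help orient the reader: in direct feedback alignment (DFA), each hidden layer receives the output error through its own fixed random feedback matrix rather than through the transposed forward weights, which sidesteps the weight transport problem, and the paper's variant makes those feedback matrices sparse. The following is a minimal sketch under assumed layer sizes, tanh activations, and a squared-error loss; none of these specifics are taken from the paper itself.

import numpy as np

rng = np.random.default_rng(0)

def sparse_feedback(rows, cols, density=0.1):
    # Fixed random feedback matrix with most entries zeroed out
    # (the density value here is an illustrative assumption).
    B = rng.standard_normal((rows, cols))
    mask = rng.random((rows, cols)) < density
    return B * mask

n_in, n_h1, n_h2, n_out = 784, 256, 256, 10   # assumed layer sizes
W1 = rng.standard_normal((n_h1, n_in)) * 0.01
W2 = rng.standard_normal((n_h2, n_h1)) * 0.01
W3 = rng.standard_normal((n_out, n_h2)) * 0.01
# Feedback matrices project the output error directly to each hidden
# layer; they are random, sparse, and never trained.
B1 = sparse_feedback(n_h1, n_out)
B2 = sparse_feedback(n_h2, n_out)

def dfa_step(x, y_target, lr=1e-3):
    global W1, W2, W3
    # Forward pass through a two-hidden-layer tanh network.
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    y = W3 @ h2
    e = y - y_target                      # output error (squared-error loss)
    # DFA: each hidden layer's teaching signal comes straight from the
    # output error via its own fixed sparse feedback matrix, not from
    # the transposed weights of the layer above (no weight transport).
    d2 = (B2 @ e) * (1.0 - h2 ** 2)       # tanh'(a) = 1 - tanh(a)^2
    d1 = (B1 @ e) * (1.0 - h1 ** 2)
    W3 = W3 - lr * np.outer(e, h2)
    W2 = W2 - lr * np.outer(d2, h1)
    W1 = W1 - lr * np.outer(d1, x)

Because d1 and d2 depend only on the output error and local activations, all layer updates can be computed as soon as the error is available, without waiting for a sequential backward pass.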

Cited by 45 publications (38 citation statements). References 38 publications.
“…First, training of the forward and backward weights may be performed separately, and hence the forward and backward pass of backpropagation may be performed asynchronously. Thus the method may be applied on distributed systems in which synchronization is difficult or time consuming [10,9], including some integrated circuits [1]. Second, by relying on random perturbations to measure gradients, the method does not rely on the environment to provide gradients.…”
Section: Discussion (mentioning; confidence: 99%)
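The quoted passage contrasts this line of work with methods that "rely on random perturbations to measure gradients". As a point of reference, here is a minimal sketch of such a perturbation-based gradient estimate on a toy problem; the quadratic loss, dimensions, and step sizes are illustrative assumptions, not drawn from the cited work.

import numpy as np

rng = np.random.default_rng(1)

def loss(w):
    # Stand-in black-box objective, minimized at w = 3 in every coordinate;
    # no analytic gradient is ever queried.
    return float(np.sum((w - 3.0) ** 2))

w = rng.standard_normal(5)
sigma, lr = 1e-2, 2e-2
for _ in range(2000):
    xi = rng.standard_normal(w.shape)                    # random probe direction
    g_hat = (loss(w + sigma * xi) - loss(w)) / sigma * xi  # gradient estimate
    w = w - lr * g_hat                                   # noisy descent step
print(w)  # drifts toward the minimizer at 3.0 in expectation

Since E[(grad · xi) xi] equals the true gradient for standard-normal probes, the update follows gradient descent on average, which is why such a learner needs only loss evaluations rather than gradients supplied by the environment.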
“…Work has also been done to use it as an alternative to backpropagation [13,14,15,16,17], however, the results have not been so strong. Hebbian learning differs from other forms of biologically plausible learning such as feedback alignment [18,19,20], target propagation [21,22] and others [23,24], in that it is completely unsupervised and no information or feedback in any form is passed to it.…”
Section: Related Work (mentioning; confidence: 99%)
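The quoted passage distinguishes Hebbian learning from feedback alignment by the absence of any error or feedback signal. For contrast with the error-driven sketches above, here is a minimal unsupervised Hebbian update (Oja's rule) on toy data; the data distribution, learning rate, and epoch count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
# Toy data with a dominant direction along the first axis (assumed setup).
X = rng.standard_normal((1000, 10)) * np.linspace(2.0, 0.5, 10)
w = rng.standard_normal(10)
w /= np.linalg.norm(w)
lr = 1e-3
for _ in range(20):                  # a few passes over the data
    for x in X:
        y = w @ x                    # post-synaptic activity (purely local)
        w += lr * y * (x - y * w)    # Oja's rule: Hebbian growth plus decay
# w tends toward the leading principal direction of the data, learned
# without any error, label, or feedback signal.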
“…However, endowing these systems with online learning abilities remains an open challenge. Since providing a top-down error signal has been very successful in deep learning [7], [8], some neuromorphic implementations have recently focused on using errors for online learning [9], [10]. Moreover, many neuroscientific studies have recently focused on error-based learning in the brain [11], [12].…”
Section: Introduction (mentioning; confidence: 99%)