Backpropagation and the brain
2020
DOI: 10.1038/s41583-020-0277-3

Abstract: During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses are embedded within multi-layered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. The backpropagation algorithm solves this problem in deep artificial neural networks, but has historically been viewed as biologically problematic. Nonetheless, recent developments in neuroscience and the successes of artificial neural networks have reinvigorated interest […]
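To make the credit-assignment problem described in the abstract concrete, here is a minimal sketch of backpropagation in a two-layer network. It is an illustration only: the network size, the squared-error loss, and all variable names are assumptions for the example, not details from the paper.

```python
# Minimal sketch (illustrative only): backpropagation assigns credit to every
# weight in a multi-layer network by propagating the output error backwards.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))               # 32 samples, 10 input features
y = rng.normal(size=(32, 1))                # regression targets

W1 = rng.normal(scale=0.1, size=(10, 20))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(20, 1))    # hidden -> output weights
lr = 0.01

for step in range(100):
    # Forward pass
    h = np.tanh(x @ W1)                     # hidden activity
    y_hat = h @ W2                          # network output
    err = y_hat - y                         # output error

    # Backward pass: the chain rule sends the error back through W2,
    # so each hidden-layer synapse learns its effect on the loss.
    dW2 = h.T @ err / len(x)
    dh = err @ W2.T * (1.0 - h ** 2)        # tanh derivative
    dW1 = x.T @ dh / len(x)

    W1 -= lr * dW1
    W2 -= lr * dW2
```

The backward pass is exactly the step that the article examines for biological plausibility: it requires error information computed at the output to reach synapses deep inside the network.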


Cited by 646 publications (513 citation statements)
References 147 publications (216 reference statements)
“…These results suggest that the face subspace portion of the representation learned by CORnet-Z may be interpreted in much simpler terms, as a shape appearance model. The results provide an important counter-example to the increasingly popular view that only distributed representations learned by multi-layer networks can well explain IT activity (Kietzmann et al., 2018; Lillicrap et al., 2020). Why a network trained on object classification should learn an approximation to a generative model of faces is an interesting question for future research.…”
Section: Discussion (mentioning)
Confidence: 90%
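The excerpt above contrasts distributed multi-layer codes with a "shape appearance model", that is, a low-dimensional generative description in which a face is reconstructed from a small set of shape and appearance coefficients. The sketch below only illustrates a simplified, purely linear version of that idea; the dimensions, bases, and names are assumptions for the example and are not taken from the cited work.

```python
# Illustrative sketch of a linear shape-appearance model: a face image is
# described by a mean face plus low-dimensional shape and appearance components.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, k_shape, k_app = 4096, 25, 25                  # assumed dimensions

mean_face = rng.normal(size=n_pixels)
shape_basis = rng.normal(size=(n_pixels, k_shape))       # in practice learned, e.g. by PCA
appearance_basis = rng.normal(size=(n_pixels, k_app))

def reconstruct(shape_coeffs, appearance_coeffs):
    """Reconstruct a flattened face image as a linear combination of components."""
    return mean_face + shape_basis @ shape_coeffs + appearance_basis @ appearance_coeffs

face = reconstruct(rng.normal(size=k_shape), rng.normal(size=k_app))
print(face.shape)                                        # (4096,)
```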
“…This information flow may also be viewed as Bayesian belief propagation or (marginal) message passing (Friston et al., 2017c; Parr et al., 2019b). In contrast to variational autoencoders in which training proceeds via backpropagation with separable forward and backward passes - where cost functions both minimize reconstruction loss and deviations between posterior latent distributions and priors (usually taking the form of a unit Gaussian) - training is suggested to occur (largely) continuously in predictive processing (via folded autoencoders), similarly to recent proposals of target propagation (Hinton, 2017; Lillicrap et al., 2020). Note: Folded autoencoders could potentially be elaborated to include attention mechanisms, wherein higher-level nodes may increase the information gain on ascending prediction-errors, corresponding to precision-weighting (i.e., inverse variance over implicit Bayesian beliefs) for selected feature representations.…”
Section: Cortex As Folded Disentangled Variational Autoencoder Hetera… (mentioning)
Confidence: 99%
“…This information flow may also be viewed as Bayesian belief propagation or (marginal) message passing (Friston et al., 2017; Parr et al., 2019). In contrast to variational autoencoders in which training proceeds via backpropagation with separable forward and backward passes - where cost functions both minimize reconstruction loss and deviations between posterior latent distributions and priors (usually taking the form of a unit Gaussian) - training is suggested to occur (largely) continuously in predictive processing (via folded autoencoders), similarly to recent proposals of target propagation (Hinton, 2017; Lillicrap et al., 2020). Note: Folded autoencoders could potentially be elaborated to include attention mechanisms, wherein higher-level nodes may increase the information gain on ascending prediction-errors, corresponding to precision-weighting (i.e., inverse variance over implicit Bayesian beliefs) over selected feature representations.…”
Section: Repeat Steps 3 and 4 Until Loopy Belief Propagation Converges (mentioning)
Confidence: 99%
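The two excerpts above contrast variational-autoencoder training (distinct forward and backward passes, with a cost combining reconstruction error and a KL term pulling the approximate posterior toward a unit-Gaussian prior) with the more continuous, local updating they attribute to predictive processing and target propagation. The sketch below only illustrates the VAE objective being referred to; the toy linear encoder and decoder, the shapes, and the names are assumptions for the example, not details from either paper.

```python
# Illustrative sketch of the variational-autoencoder objective mentioned above:
# a reconstruction term plus a KL term pulling q(z|x) = N(mu, sigma^2) toward a
# unit-Gaussian prior. In a standard VAE this loss is minimised by
# backpropagation, with separate forward and backward passes.
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    """Toy linear encoder producing the mean and log-variance of q(z|x)."""
    return x @ W_mu, x @ W_logvar

def decoder(z, W_dec):
    """Toy linear decoder mapping latents back to the input space."""
    return z @ W_dec

def vae_loss(x, W_mu, W_logvar, W_dec):
    mu, log_var = encoder(x, W_mu, W_logvar)
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps                  # reparameterisation trick
    x_hat = decoder(z, W_dec)
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))     # reconstruction loss
    kl = -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
    return recon + kl                                     # negative evidence lower bound

x = rng.normal(size=(8, 16))                              # 8 samples, 16 features
W_mu, W_logvar = rng.normal(size=(16, 4)), rng.normal(size=(16, 4))
W_dec = rng.normal(size=(4, 16))
print(vae_loss(x, W_mu, W_logvar, W_dec))
```

In the folded-autoencoder reading quoted above, there is no separate backward pass over this objective; errors are instead reduced (largely) continuously by local message passing.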