2022
DOI: 10.48550/arxiv.2206.02629
Preprint

Backpropagation at the Infinitesimal Inference Limit of Energy-Based Models: Unifying Predictive Coding, Equilibrium Propagation, and Contrastive Hebbian Learning

Abstract: How the brain performs credit assignment is a fundamental unsolved problem in neuroscience. Many 'biologically plausible' algorithms have been proposed, which compute gradients that approximate those computed by backpropagation (BP), and which operate in ways that more closely satisfy the constraints imposed by neural circuitry. Many such algorithms utilize the framework of energy-based models (EBMs), in which all free variables in the model are optimized to minimize a global energy function. However, in the li…
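As a rough sketch of the framework the abstract describes (the notation below is illustrative and not taken from the paper): an EBM attaches a global energy to all activities, inference relaxes the free variables to an equilibrium, and learning descends the energy with respect to the weights.

```latex
% A minimal sketch of the generic EBM credit-assignment scheme, assuming
% a layered network with activities h, weights \theta, input x, target y,
% and an output-nudging strength \beta (all notation illustrative).
\begin{align}
  F(h, \theta)   &= E(h, \theta; x) + \beta\, \mathcal{L}(h_L, y)
                   && \text{total energy} \\
  \hat{h}(\beta) &= \arg\min_{h}\; F(h, \theta)
                   && \text{inference: relax free variables} \\
  \Delta\theta   &\propto -\left.\frac{\partial F}{\partial \theta}\right|_{h = \hat{h}(\beta)}
                   && \text{learning: descend the energy}
\end{align}
```

In the infinitesimal inference limit β → 0, the equilibrium activities stay close to their free-phase values, which is the regime in which, per the paper's title, these updates approximate BP gradients.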

Cited by 2 publications (2 citation statements). References 42 publications.
“…We have provided a detailed analysis of the dynamics of inference in predictive coding, and have shown that there is a tendency for that inference to slow down as training progresses. [20] separates the total energy of an Energy Based Model into the supervised loss (which depends on the errors at the top layer) and the internal energy (which corresponds to the energy of the hidden layers). We have shown that it is only the supervised loss which suffers from a slow-down in inference.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
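To make the decomposition referenced in the statement above concrete, here is a hedged reconstruction using the standard predictive-coding squared-error energy; the notation is mine and not necessarily the cited paper's exact form.

```latex
% Hedged sketch of the energy split described above, for a network with
% layer activities h_l, weights W_l, nonlinearity f, and target y
% (notation mine): an internal energy over hidden layers plus a
% supervised loss driven by top-layer errors.
\begin{equation}
  E_{\text{total}}
  = \underbrace{\sum_{l=1}^{L-1} \tfrac{1}{2}\bigl\lVert h_l - f(W_{l-1} h_{l-1}) \bigr\rVert^2}_{\text{internal energy (hidden layers)}}
  + \underbrace{\tfrac{1}{2}\bigl\lVert y - f(W_{L-1} h_{L-1}) \bigr\rVert^2}_{\text{supervised loss (top-layer errors)}}
\end{equation}
```

Under this reading, the slow-down the citing authors report is confined to the relaxation driven by the second (supervised) term.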
“…Previous work found that in the limit where ĥ_l → h_l, IL approaches BP/SGD (Bogacz 2017, 2019; Millidge et al. 2022a; Rosenbaum 2022). This limit is equivalent, in our notation, to the limit where β → 0, since in this case the output layer ĥ_L approaches its initial value h_L, resulting in ĥ_l → h_l.…”
Section: Reinterpreting IL's Relation to BP and SGD (citation type: mentioning)
confidence: 94%
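A hedged sketch of the limit this statement describes, reusing the illustrative nudged-energy notation from the block after the abstract (β scales a weak clamp on the output layer); the final line is the equilibrium-propagation-style gradient estimate, one standard way this limit recovers the BP/SGD gradient.

```latex
% As \beta \to 0, the weakly clamped output layer returns to its free
% value, so every layer's equilibrium approaches its free-phase activity
% (notation illustrative, matching the sketch after the abstract).
\begin{align}
  \hat{h}_L &\approx h_L + \beta\,(y - h_L) \;\longrightarrow\; h_L
             && (\beta \to 0) \\
  \hat{h}_l &\longrightarrow h_l \quad \text{for all layers } l
             && (\beta \to 0) \\
  \lim_{\beta \to 0}\; \frac{1}{\beta}\!\left(
      \left.\frac{\partial F}{\partial \theta}\right|_{\hat{h}(\beta)}
    - \left.\frac{\partial F}{\partial \theta}\right|_{\hat{h}(0)}
  \right) &= \frac{d\mathcal{L}}{d\theta}
             && \text{(the BP/SGD gradient)}
\end{align}
```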