2022
DOI: 10.1162/neco_a_01547
Beyond Backpropagation: Bilevel Optimization Through Implicit Differentiation and Equilibrium Propagation

Abstract: This review examines gradient-based techniques to solve bilevel optimization problems. Bilevel optimization extends the loss minimization framework underlying statistical learning to systems that are implicitly defined through a quantity they minimize. This characterization can be applied to neural networks, optimizers, algorithmic solvers, and even physical systems and allows for greater modeling flexibility compared to the usual explicit definition of such systems. We focus on solving learning problems of th…
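The abstract describes systems defined implicitly through a quantity they minimize, whose parameters are trained by differentiating through that inner minimization. A minimal sketch of the implicit-differentiation hypergradient, using a hypothetical scalar inner problem g(w, λ) = 0.5·λ·w² − w (chosen only so the equilibrium w*(λ) = 1/λ is available in closed form; names and functions are illustrative, not from the paper):

```python
def inner_solution(lam):
    # argmin_w g(w, lam) for g = 0.5*lam*w**2 - w, solved in closed form: w* = 1/lam
    return 1.0 / lam

def hypergradient(lam):
    # Outer objective f(w) = 0.5*w**2, evaluated at the inner equilibrium.
    # The implicit function theorem gives dw*/dlam = -(d2g/dw2)^-1 * (d2g/dw dlam),
    # avoiding any unrolling of the inner optimization.
    w = inner_solution(lam)
    g_ww = lam              # second derivative of g in w
    g_wlam = w              # mixed second derivative of g
    dw_dlam = -g_wlam / g_ww
    f_w = w                 # df/dw
    return f_w * dw_dlam    # chain rule: df(w*(lam))/dlam
```

For λ = 2, f(w*(λ)) = 0.5/λ² has exact derivative −1/λ³ = −0.125, which the implicit-differentiation formula reproduces.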

Cited by 6 publications (1 citation statement)
References 34 publications
“…This approximation occurs as the strength of the nudging becomes infinitesimal, resulting in the difference between the phases resembling a finite-difference gradient [36]. Other efforts have developed theory for EP, casting it in terms of a type of bilevel optimization [85]. However, the original formulations of CHL schemes such as EP generally fail to scale up to complex tasks; to address this issue, variations of CHL/EP have been designed that improve efficiency as well as performance on particular tasks [24,70,27].…”
Section: Contrastive Hebbian Learning
confidence: 99%
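The citation statement describes equilibrium propagation's gradient as the difference between a free phase and a weakly nudged phase, divided by the nudging strength β. A minimal sketch on a hypothetical quadratic energy E(θ, s) = 0.5·(s − θx)² with cost C(s) = 0.5·(s − y)², where both equilibria are available in closed form (all names and the toy energy are illustrative assumptions, not the paper's model):

```python
def free_phase(theta, x):
    # Equilibrium of E(theta, s) = 0.5*(s - theta*x)**2: s* = theta*x
    return theta * x

def nudged_phase(theta, x, y, beta):
    # Equilibrium of E + beta*C with C(s) = 0.5*(s - y)**2
    return (theta * x + beta * y) / (1.0 + beta)

def dE_dtheta(theta, s, x):
    # Partial derivative of the energy in theta, at a fixed state s
    return -(s - theta * x) * x

def ep_gradient(theta, x, y, beta=1e-4):
    # EP estimate: finite difference of dE/dtheta across the two phases.
    # As beta -> 0 this converges to the true loss gradient dC(s*(theta))/dtheta.
    s_free = free_phase(theta, x)
    s_nudged = nudged_phase(theta, x, y, beta)
    return (dE_dtheta(theta, s_nudged, x) - dE_dtheta(theta, s_free, x)) / beta
```

For θ = 0.5, x = 2, y = 3, the exact gradient of 0.5·(θx − y)² is (θx − y)·x = −4; the EP estimate matches it up to an O(β) error, illustrating the infinitesimal-nudging limit the statement refers to.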