2022
DOI: 10.48550/arxiv.2203.02711
Preprint

Meta Mirror Descent: Optimiser Learning for Fast Convergence

Abstract: Optimisers are an essential component for training machine learning models, and their design influences learning speed and generalisation. Several studies have attempted to learn more effective gradient-descent optimisers via solving a bilevel optimisation problem where generalisation error is minimised with respect to optimiser parameters. However, most existing optimiser learning methods are intuitively motivated, without clear theoretical support. We take a different perspective starting from mirror descent…
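
The abstract cuts off mid-sentence, but the starting point it names, mirror descent, is a standard algorithm. As a rough illustration only (this is textbook mirror descent, not the paper's learned optimiser; the function name mirror_descent_simplex and all parameters are invented for this sketch), the following shows the negative-entropy instance, exponentiated gradient, on the probability simplex:

```python
import numpy as np

def mirror_descent_simplex(grad_f, x0, lr=0.1, steps=100):
    """Classic mirror descent with the negative-entropy mirror map.

    With phi(x) = sum_i x_i log x_i, the update
        x_{t+1} = argmin_x  lr * <grad, x> + D_phi(x, x_t)
    has the closed form x_{t+1} proportional to x_t * exp(-lr * grad),
    i.e. exponentiated gradient.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        x = x * np.exp(-lr * g)   # dual-space step followed by the inverse mirror map
        x /= x.sum()              # renormalise back onto the simplex
    return x

# Toy usage: minimise f(x) = <c, x> over the probability simplex.
c = np.array([3.0, 1.0, 2.0])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3)
print(x_star)  # mass concentrates on the smallest-cost coordinate
```

Choosing the squared Euclidean mirror map phi(x) = 0.5 * ||x||^2 instead recovers plain gradient descent, which is what makes the mirror map a natural object to parameterise; per the abstract, the paper's angle is to ground optimiser learning in this mirror-descent view rather than in purely intuitive designs.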

Cited by: 0 publications
References: 6 publications