2021
DOI: 10.48550/arxiv.2103.12828
Preprint

Learning to Optimize: A Primer and A Benchmark

Tianlong Chen,
Xiaohan Chen,
Wuyang Chen
et al.

Abstract: Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods, aiming at reducing the laborious iterations of hand engineering. It automates the design of an optimization method based on its performance on a set of training problems. This data-driven procedure generates methods that can efficiently solve problems similar to those in the training. In sharp contrast, the typical and traditional designs of optimization methods are theory-driven, so they obtain …

Cited by 31 publications (45 citation statements) | References 61 publications
“…Learning to optimize. Learning to optimize (L2O) applies deep learning to learn from past optimization experience to optimize future problems more effectively and faster; see [Chen et al., 2021] for a survey. Model-free L2O uses recurrent neural networks to discover new optimizers suitable for similar problems [Andrychowicz et al., 2016, Li and Malik, 2016].…”
Section: Related Work
confidence: 99%
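The snippet above describes the core L2O loop: meta-train an update rule on a distribution of training problems, then deploy it on unseen problems from the same family. A minimal sketch of that loop, with a single learned log-step-size standing in for the recurrent-network optimizers of Andrychowicz et al. (the quadratic problem family, dimensions, and finite-difference meta-training are illustrative choices, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_problem():
    # draw a random 5-d quadratic f(x) = 0.5 * x^T A x from a fixed family
    return np.diag(rng.uniform(0.5, 2.0, size=5))

def run_optimizer(log_lr, A, steps=20):
    # apply the parameterized update rule for a fixed number of steps
    # and return the final objective value (the meta-loss)
    x = np.ones(5)
    lr = np.exp(log_lr)
    for _ in range(steps):
        g = A @ x        # gradient of 0.5 * x^T A x
        x = x - lr * g   # the "optimizer" being learned
    return 0.5 * x @ A @ x

# meta-train the step size on a small set of training problems by
# finite-difference gradient descent on the average meta-loss
train_problems = [make_problem() for _ in range(8)]
meta_loss = lambda t: np.mean([run_optimizer(t, A) for A in train_problems])

log_lr, eps, meta_lr = np.log(0.01), 1e-4, 0.1
for _ in range(200):
    g = (meta_loss(log_lr + eps) - meta_loss(log_lr - eps)) / (2 * eps)
    log_lr -= meta_lr * np.clip(g, -5.0, 5.0)  # clip for stability

# the meta-trained step size transfers to a fresh problem from the
# same distribution, beating the initial hand-picked guess
A_test = make_problem()
print(run_optimizer(np.log(0.01), A_test), run_optimizer(log_lr, A_test))
```

In practice L2O methods replace the scalar here with a neural network and the finite-difference meta-gradient with backpropagation through the unrolled inner loop; the train/test split over problem instances is the part that carries over unchanged.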
“…An advantage of our representation is that it can represent an arbitrary number of solutions (Figure C.5) and can handle the case when the solution set is continuous (Figure C.2). This highlights the difference between our setup and that of existing L2O methods [Chen et al., 2021]: at test time, we do not need access to {f_ω}_{ω∈Ω} or their gradients, which can be costly to evaluate or unavailable; instead we only need ω (e.g. in the case of object detection, ω is an image).…”
Section: Extracting Solutions
confidence: 99%
“…Learning to optimise. Learning to optimise (L2O) combines the flexible data-driven learning procedure and interpretable rule-based optimisation [8]. Algorithmic unrolling is one main approach of L2O [26].…”
Section: Related Work
confidence: 99%
“…To address the above limitations, we propose a novel functional learning framework to learn a mapping from node observations to the underlying graph topology with desired structural property. Our framework is inspired by the emerging field of learning to optimise (L2O) [8,26]. Specifically, as shown in Figure 1, we first unroll an iterative algorithm for solving the aforementioned regularised graph learning objective.…”
Section: Introduction
confidence: 99%
“…Machine learning techniques have been used to design optimization methods (e.g., [15], [16]). There are fewer works that develop embedded-ML methods specifically for distributed optimization algorithms.…”
Section: Introduction
confidence: 99%