2016
DOI: 10.1007/s10107-016-1009-3

Smooth strongly convex interpolation and exact worst-case performance of first-order methods

Abstract: We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex programs. Finding the worst-case performance of a black-box first-order method is formulated as an optimization problem over a set of smooth (strongly) convex functions and initial conditions. We develop closed-form necessary and sufficient conditions for smooth (strongly) convex interpolation, which provide a finite representation…
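For reference, the interpolation conditions at the heart of this approach take the following form (restated informally from the published paper; see its Theorem 4 for the precise statement). A finite set of triples $(x_i, g_i, f_i)$ can be interpolated by an $L$-smooth, $\mu$-strongly convex function $f$ with $f(x_i) = f_i$ and $\nabla f(x_i) = g_i$ (for $0 \le \mu < L$) if and only if, for every pair of indices $i, j$:

$$
f_i - f_j - \langle g_j, x_i - x_j \rangle \;\ge\; \frac{1}{2(1 - \mu/L)} \left( \frac{1}{L}\|g_i - g_j\|^2 + \mu \|x_i - x_j\|^2 - \frac{2\mu}{L} \langle g_i - g_j, x_i - x_j \rangle \right).
$$

Setting $\mu = 0$ recovers the smooth convex case $f_i \ge f_j + \langle g_j, x_i - x_j \rangle + \frac{1}{2L}\|g_i - g_j\|^2$, and letting $L \to \infty$ recovers the usual strong convexity lower bound.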

Cited by 138 publications (240 citation statements)
References 20 publications
“…Figure 1 presents a comparison of the worst-case bounds in the case κ = 100 for GFOM, for an SSEP-based method performing no line searches, the celebrated fast (or accelerated) gradient method (FGM) for smooth strongly convex minimization [36, Theorem 2.1.12], and the very recent triple momentum method (TMM) [52]. These worst-case bounds were derived numerically by solving the corresponding PEPs using the interpolation conditions presented in Example 1 (see [13,50] for details on the derivation of PEPs for fixed-step methods). Note that the bound for the SSEP method was generated for the form (20); bounds for the efficient form (21) behave almost exactly like the bounds for the form (20) (the difference could not be observed from this plot) and were therefore omitted from the comparison.…”
Section: SSEP-based Gradient Methods For Smooth Strongly Convex Minimization
confidence: 99%
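To make concrete how such PEPs are solved numerically, here is a minimal, hypothetical sketch (not the authors' code) of the semidefinite formulation for a single gradient step over L-smooth convex functions, written with CVXPY. The Gram matrix G encodes inner products of the iterates and gradients, the constraints are the μ = 0 interpolation conditions stated after the abstract, and all variable names are ours.

```python
# PEP sketch: worst case of f(x1) - f(x*) after one gradient step
# x1 = x0 - (1/L) * grad f(x0), over L-smooth convex functions,
# subject to ||x0 - x*||^2 <= 1. Hypothetical illustration only.
import cvxpy as cp
import numpy as np

L = 1.0  # smoothness constant

# Coordinates of each point in the Gram basis (x0 - x*, g0, g1);
# the minimizer x* has zero gradient and reference value f* = 0.
x = {"s": np.zeros(3),
     0: np.array([1.0, 0.0, 0.0]),
     1: np.array([1.0, -1.0 / L, 0.0])}   # x1 = x0 - (1/L) g0
g = {"s": np.zeros(3),
     0: np.array([0.0, 1.0, 0.0]),
     1: np.array([0.0, 0.0, 1.0])}

G = cp.Variable((3, 3), PSD=True)          # Gram matrix of the basis vectors
f = {"s": 0.0, 0: cp.Variable(), 1: cp.Variable()}

constraints = [x[0] @ G @ x[0] <= 1]       # initial condition ||x0 - x*||^2 <= 1
for i in ("s", 0, 1):
    for j in ("s", 0, 1):
        if i == j:
            continue
        # Smooth convex interpolation (mu = 0):
        # f_i >= f_j + <g_j, x_i - x_j> + (1/(2L)) * ||g_i - g_j||^2
        constraints.append(
            f[i] >= f[j] + g[j] @ G @ (x[i] - x[j])
                  + (g[i] - g[j]) @ G @ (g[i] - g[j]) / (2 * L)
        )

problem = cp.Problem(cp.Maximize(f[1]), constraints)
problem.solve()
print(problem.value)   # approx 1/6 = L/(4n+2) for n = 1, the known tight bound
```

The same recipe extends to n steps and to μ > 0 by adding one basis vector per gradient and switching to the strongly convex interpolation conditions; packaged implementations of this construction also exist (e.g. the PEPit Python toolbox).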
“…Among them, a systematic approach to lower bounds (which focuses on quadratic cases) is presented by Arjevani et al. in [1], a systematic use of control theory (via integral quadratic constraints) for developing upper bounds is presented by Lessard et al. in [26], and the performance estimation approach, which aims at finding worst-case bounds, was originally developed in [13] (see also the surveys in [11] and [47]). Those methodologies are mostly presented as tools for performing worst-case analyses (see the numerous examples in [11,19,47,48,50,51]); however, such techniques were also recently used to develop new methods with improved worst-case complexities. Among others, such an approach was used in [13,22] to devise a fixed-step method that attains the best possible worst-case performance for smooth convex minimization [12], and later in [14] to obtain a variant of Kelley's cutting plane method with the best possible worst-case guarantee for non-smooth convex minimization.…”
Section: Links With Systematic and Computer-assisted Approaches To Worst-case Analyses
confidence: 99%
“…Now, we make a short inventory of the inequalities available to prove the different global convergence rates. Recent works on performance estimation of first-order methods (see [10,11]) guarantee that no other inequalities are needed in order to obtain the desired convergence results.…”
Section: Basic Inequalities Characterizing One Iteration Of PGM
confidence: 99%
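For context, the inequalities being inventoried are typically of the following standard form, sketched here in our own notation (not the citing paper's) for a composite objective $F = g + h$ with $g$ being $L$-smooth and $\mu$-strongly convex and $h$ closed convex:

$$
\begin{aligned}
g(y) &\le g(x) + \langle \nabla g(x), y - x \rangle + \tfrac{L}{2}\|y - x\|^2 && \text{(smoothness)},\\
g(y) &\ge g(x) + \langle \nabla g(x), y - x \rangle + \tfrac{\mu}{2}\|y - x\|^2 && \text{(strong convexity)},\\
x^+ &= \operatorname{prox}_{\gamma h}\!\bigl(x - \gamma \nabla g(x)\bigr) \;&&\Longleftrightarrow\; x - \gamma \nabla g(x) - x^+ \in \gamma\, \partial h(x^+),
\end{aligned}
$$

where the last line is the optimality condition of the proximal gradient step. The cited performance estimation results certify that combinations of such inequalities suffice to prove the known global rates.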
“…One such question was raised in [23], where the authors consider the worst-case performance $\sup_{\theta \in \Theta} F_\theta(x_n) - F_\theta(x_\theta)$ of gradient-based algorithms over the set of continuously differentiable functions with Lipschitz-continuous gradients, and with a uniform upper bound on the Lipschitz constants. Subsequent work along the same lines can be found in [31,48].…”
Section: Learning An Optimization Solver
confidence: 95%