2018 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit.2018.8437794
An Explicit Convergence Rate for Nesterov's Method from SDP

Abstract: The framework of Integral Quadratic Constraints (IQC) introduced by Lessard et al. (2014) reduces the computation of upper bounds on the convergence rate of several optimization algorithms to semi-definite programming (SDP). In particular, this technique was applied to Nesterov's accelerated method (NAM). For quadratic functions, this SDP was solved explicitly, leading to a new bound on the convergence rate of NAM, and for arbitrary strongly convex functions it was shown numerically that IQC can improve bounds…
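The algorithm being analyzed can be made concrete. Below is a minimal sketch of the constant-parameter form of Nesterov's accelerated method for an m-strongly convex, L-smooth function; the step size, momentum coefficient, and test quadratic are standard illustrative choices, not taken from the paper:

```python
import numpy as np

def nesterov_accelerated_method(grad, x0, L, m, iters=300):
    """Constant-step Nesterov acceleration for an m-strongly convex,
    L-smooth function, the iteration analyzed via IQCs/SDP."""
    kappa = L / m
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # momentum weight
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x + beta * (x - x_prev)        # extrapolation step
        x_prev, x = x, y - grad(y) / L     # gradient step taken at y
    return x

# Illustrative quadratic f(x) = 0.5 x^T Q x with spectrum in [m, L] = [1, 100].
Q = np.diag([1.0, 10.0, 100.0])
x_star = np.zeros(3)                       # unique minimizer
x = nesterov_accelerated_method(lambda z: Q @ z, np.ones(3), L=100.0, m=1.0)
print(np.linalg.norm(x - x_star))          # small: iterates reach the minimizer
```

With condition number κ = 100 the classical per-step rate is about 1 − 1/√κ = 0.9, so 300 iterations drive the error far below 1e-6.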

Cited by 5 publications (6 citation statements)
References 11 publications
“…implies (33) for y ∈ ℓ2. However, typically either only φ or the transformed operator φ̃ is bounded.…”
Section: Definition 6 (Doubly Hyperdominant Matrix)
confidence: 98%
“…This also clarifies why various attempts to improve the rate by manual tuning [16] or by sum-of-squares optimization [6] of the algorithm parameters were not successful. Our computation of an explicit optimal rate-bound for design is analogous to what has been achieved for the analysis of Nesterov's algorithm in [28]. Our approach brings out the intrinsic system-theoretic reasons for the limits of performance in algorithm design; this holds both for the value of the optimal rate (determined by two zeros of some transfer matrix) and for the insight that algorithms (2.7) with matrices A of dimension larger than two are not beneficial.…”
Section: Convexification of Operator Formulation
confidence: 97%
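The linear-system view underlying these citation statements can be sketched concretely: the algorithm is written as an LTI system (A, B, C) in feedback with the gradient, and on each eigendirection of a quadratic the closed loop is a 2×2 matrix whose spectral radius is the per-step rate. The sketch below uses NAM with standard constant parameters; the numbers are illustrative, not from the cited papers:

```python
import numpy as np

L, m = 100.0, 1.0
kappa = L / m
alpha = 1.0 / L
beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)

# NAM as an LTI system with state (x_k, x_{k-1}) in feedback with grad f:
#   xi_{k+1} = A xi_k + B u_k,   y_k = C xi_k,   u_k = grad f(y_k)
A = np.array([[1 + beta, -beta], [1.0, 0.0]])
B = np.array([[-alpha], [0.0]])
C = np.array([[1 + beta, -beta]])

# On an eigendirection with curvature lam, grad f(y) = lam * y, so the
# closed loop is A + lam*B*C; its spectral radius is the per-step rate.
lam = m                                  # slowest eigendirection
A_cl = A + lam * (B @ C)
rho = max(abs(np.linalg.eigvals(A_cl)))
print(rho)   # ≈ 1 - 1/sqrt(kappa) = 0.9
```

For this κ = 100 example the closed loop has a double eigenvalue at exactly 1 − 1/√κ = 0.9, the quadratic-case rate the abstract refers to.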
“…We then show in Section 4 how the special structure of the system can be exploited to convexify the joint search for the algorithm parameters and the dynamic Zames-Falb multipliers which certify convergence. Our approach permits deriving explicit formulas for the optimal convergence rate achievable by synthesis, in analogy to the analysis results for Nesterov's algorithm in [28]. In this fashion, we are able to prove that the convergence rate of the triple momentum algorithm is indeed optimal when the class of causal Zames-Falb multipliers is used to assure convergence.…”
confidence: 95%
“…Combining (27) with (28), it follows that V(x_k, x_{k+1}) + (f_k − f*) is a Lyapunov function that certifies geometric convergence with rate ρ. As in the case M_{m,L}, it is possible to further augment the storage function and use more supply rates, which can potentially yield less conservative upper bounds on the worst-case convergence rate ρ.…”
Section: This Bound Can Be Minimized If η =
confidence: 99%
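What a geometric-rate certificate promises can be checked empirically (this is an illustrative numerical check, not the cited paper's Lyapunov construction): along NAM iterates on a quadratic, the function gap f(x_k) − f* decays geometrically, and the tail of the gap sequence recovers a rate near 1 − 1/√κ:

```python
import numpy as np

L, m = 100.0, 1.0
beta = (np.sqrt(L / m) - 1.0) / (np.sqrt(L / m) + 1.0)
Q = np.diag([1.0, 10.0, 100.0])          # curvatures in [m, L]
f = lambda z: 0.5 * z @ Q @ z            # minimizer z* = 0, so f* = 0

x_prev = x = np.ones(3)
gaps = []
for _ in range(200):
    y = x + beta * (x - x_prev)          # NAM extrapolation + gradient step
    x_prev, x = x, y - (Q @ y) / L
    gaps.append(f(x))

# Iterates shrink like rho^k, so the gap shrinks like rho^(2k);
# estimate rho from 50 tail iterations.
rho_est = (gaps[-1] / gaps[-51]) ** (1 / 100)
print(rho_est)   # close to 1 - 1/sqrt(kappa) = 0.9
```

The estimate slightly exceeds 0.9 because the slowest closed-loop mode is a double eigenvalue (a k·ρ^k transient), which is exactly the kind of fine structure the SDP/Lyapunov certificates bound rigorously.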