2017
DOI: 10.1007/s10957-017-1108-1

Rate of Convergence of the Bundle Method

Abstract: We prove that the bundle method for nonsmooth optimization achieves solution accuracy ε in at most O(ln(1/ε)/ε) iterations if the function is strongly convex. The result holds for the versions of the method with multiple cuts and with cut aggregation.
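To make the object of this analysis concrete, here is a minimal sketch of a proximal bundle iteration: minimize a cutting-plane model of f plus a quadratic prox term, then take a serious step (move the center) on sufficient decrease, or a null step (only add a cut) otherwise. This is an illustration under stated assumptions, not the exact variant analyzed in the paper: the names (oracle, t, m), the fixed prox parameter, the use of SciPy's SLSQP to solve the subproblem, and the absence of cut aggregation are all simplifications made here.

```python
# A minimal proximal bundle method sketch (not the exact variant analyzed
# in the paper): hypothetical names throughout; the prox parameter t is
# held fixed and the bundle grows without aggregation.
import numpy as np
from scipy.optimize import minimize

def bundle_method(oracle, x0, t=1.0, m=0.1, tol=1e-8, max_iter=100):
    x_c = np.asarray(x0, dtype=float)      # current "serious" center
    f_c, g = oracle(x_c)
    n = x_c.size
    # Each cut (g_i, b_i) is the affine minorant  x -> g_i @ x + b_i.
    cuts = [(g, f_c - g @ x_c)]
    for _ in range(max_iter):
        # Prox subproblem: min_x max_i (g_i @ x + b_i) + |x - x_c|^2/(2t),
        # posed as a smooth problem in (x, r) with r >= every cut.
        def obj(z):
            x, r = z[:n], z[n]
            return r + (x - x_c) @ (x - x_c) / (2.0 * t)
        cons = [{"type": "ineq",
                 "fun": lambda z, gi=gi, bi=bi: z[n] - (gi @ z[:n] + bi)}
                for gi, bi in cuts]
        z0 = np.concatenate([x_c, [f_c]])   # feasible, since cuts minorize f
        res = minimize(obj, z0, constraints=cons)   # SLSQP by default
        x_trial, model_val = res.x[:n], res.x[n]
        predicted = f_c - model_val         # predicted decrease
        if predicted <= tol:
            return x_c, f_c
        f_trial, g_trial = oracle(x_trial)
        if f_c - f_trial >= m * predicted:  # sufficient decrease: serious step
            x_c, f_c = x_trial, f_trial
        # otherwise a null step: keep the center, only enrich the model
        cuts.append((g_trial, f_trial - g_trial @ x_trial))
    return x_c, f_c

# Example: f(x) = |x|_1 + |x|^2 / 2 is strongly convex and nonsmooth,
# with minimizer at the origin.
def oracle(x):
    return np.abs(x).sum() + 0.5 * (x @ x), np.sign(x) + x

x_star, f_star = bundle_method(oracle, np.array([3.0, -2.0]))
```

On this sample objective the iterates collapse toward the origin; a production implementation would use a dedicated QP solver for the subproblem, manage the prox parameter, and aggregate cuts to keep the bundle small.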

Cited by 28 publications (39 citation statements) | References 10 publications
“…Unfortunately, all these aspects are only characterized experimentally; all convergence arguments (and efficiency estimates) on PBM hinge on extreme aggregation. The complexity estimate can actually be improved to the (still sublinear) O(log(1/ε)·(1/ε)) with further assumptions on f (in particular, strong coercivity at the unique optimum) [28], but the same bound still holds for B̄_i and for any arbitrarily large B_i; hence, the theoretical worst-case analysis seems unable to capture some important aspects of the practical behaviour of BM, (fortunately) substantially underestimating convergence speed. This is not helped by the fact that convergence arguments, as discussed in §2.1, deal with the sequence of SS and with sub-sequences of consecutive NS between two SS as two loosely related processes; after an SS is declared, the algorithm can basically be restarted from scratch, since the arguments allow B to be completely changed at that point.…”
Section: Algorithmic Uses of Duality
confidence: 99%
“…where the second inequality follows from the assumption of this case and the projection identity (23). Indeed, (b)-regularity yields…”
Section: Proof of Theorem 22
confidence: 87%
“…Otherwise, the algorithm takes a "null step," which consists of using subgradient information to improve the model. Bundle methods often perform well in practice and their convergence/complexity theory is understood in several settings [23,25,33,38,39,47,54,64]. Most relevantly for this work, on sharp convex functions, variants of the bundle method converge superlinearly relative to the number of serious steps [50] and converge linearly relative to both serious and null steps [18].…”
Section: Introduction
confidence: 99%
“…However, relative to both null and serious steps, prior works analyzing the convergence of bundle methods (see, for example, Kiwiel (2000); Du and Ruszczynski (2017); Diaz and Grimmer (2021)) have only derived sublinear guarantees for global convergence, even when the objective is strongly convex. In contrast, we show in this paper that the Survey Descent iteration, at least in the case of a strongly convex, max-of-smooth function, achieves a local linear convergence rate.…”
Section: Relation to Bundle Methods
confidence: 99%
“…Juditsky and Nemirovski (2011)). Within the well-studied realm of first-order bundle methods, in particular, theoretical guarantees have remained sublinear relative to the total number of null and serious steps (Kiwiel, 2000; Du and Ruszczynski, 2017; Diaz and Grimmer, 2021), even in the presence of desirable properties such as δ-strongly convex objectives. For comparison, in the smooth setting, GD (and its projected and proximal variants) possesses well-recognized theoretical guarantees of linear convergence on L-smooth and δ-strongly convex objectives (Beck, 2017, Theorem 10.29).…”
Section: Linear Convergence and Nonsmooth Objectives
confidence: 99%
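For context on the linear-versus-sublinear contrast drawn in the excerpts above, the textbook gradient-descent rate reads as follows; this is a standard fact consistent with the Beck (2017) theorem cited in the quote, not a result of the paper under review:

```latex
% Gradient descent with step size 1/L on an L-smooth, \delta-strongly convex f:
f(x_k) - f^\star \le \left(1 - \tfrac{\delta}{L}\right)^{k}\bigl(f(x_0) - f^\star\bigr),
\qquad\text{hence}\qquad
k \ge \frac{L}{\delta}\,\ln\frac{f(x_0) - f^\star}{\varepsilon}
\;\Longrightarrow\; f(x_k) - f^\star \le \varepsilon .
```

That is O(ln(1/ε)) iterations, a linear rate; the strongly convex bundle bound O(ln(1/ε)/ε) carries an extra 1/ε factor and is therefore sublinear in the sense these excerpts describe.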