2015
DOI: 10.48550/arxiv.1510.08234
Preprint

From error bounds to the complexity of first-order descent methods for convex functions

Cited by 8 publications (23 citation statements: 1 supporting, 22 mentioning, 0 contrasting)
References 0 publications
“…In [16,17,18] the KL inequality was used to derive convergence rates of descent-type first-order methods. The KL inequality was used to study convex optimization problems in [19,20].…”
Section: Introduction (mentioning)
confidence: 99%
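
For context, here is a minimal statement of the Kurdyka-Łojasiewicz (KL) inequality this excerpt refers to; the notation ($\varphi$, $\eta$, and the subdifferential $\partial f$) follows the standard convention and is not taken from the citing papers.

```latex
% Standard subgradient form of the KL inequality at a point $\bar{x}$:
% there exist $\eta > 0$, a neighborhood $U$ of $\bar{x}$, and a concave
% desingularizing function $\varphi \colon [0,\eta) \to \mathbb{R}_{+}$ with
% $\varphi(0) = 0$, $\varphi$ continuously differentiable on $(0,\eta)$,
% and $\varphi' > 0$, such that for all $x \in U$ with
% $f(\bar{x}) < f(x) < f(\bar{x}) + \eta$:
\[
  \varphi'\bigl(f(x) - f(\bar{x})\bigr)\,
  \operatorname{dist}\bigl(0,\, \partial f(x)\bigr) \;\ge\; 1 .
\]
```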
“…) for the denominator in (11). Now let $1 < \alpha < \tfrac{3}{2}$ and put $A = \{(x, |x|^{\alpha}) : x \in \mathbb{R}\}$, so that $A$ is above $B$ and touches it at the origin.…”
Section: Slowly Shrinking Reach (mentioning)
confidence: 99%
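
A one-line check of the tangential contact claimed in this excerpt; the set $B$ is not defined in the fragment, so the sketch below only uses the boundary curve of $A$.

```latex
% The boundary curve of $A$ is $y = |x|^{\alpha}$ with $1 < \alpha < \tfrac{3}{2}$.
% Its slope vanishes at the origin, so any contact there is tangential:
\[
  \frac{d}{dx}\,|x|^{\alpha} \;=\; \alpha\,|x|^{\alpha-1}\operatorname{sgn}(x)
  \;\xrightarrow[x \to 0]{}\; 0 \qquad (\alpha > 1).
\]
```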
“…Since $A$, $B$ are convex, this is not surprising, as here linear convergence would require $A$, $B$ to intersect at an angle, and not tangentially. For the Łojasiewicz exponent of $i_A + \tfrac{1}{2} d_B^2$ in the convex case see also [11], and for general considerations as to obtaining optimal $\theta$ see [17]. The corresponding global convergence theorem for the case $r_* = 0$ is obtained in the same way using the sets $A_s$, $B_s$ and [30, Theorem 1], which leads to the rate b…”
Section: Convergence (mentioning)
confidence: 99%
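
For reference, the Łojasiewicz inequality with exponent $\theta$ that the excerpt's "optimal $\theta$" refers to; this is the standard definition, not a formula taken from [11] or [17].

```latex
% $f$ satisfies the Łojasiewicz inequality at a critical point $\bar{x}$
% with exponent $\theta \in [0, 1)$ if, for all $x$ near $\bar{x}$,
\[
  \bigl| f(x) - f(\bar{x}) \bigr|^{\theta}
  \;\le\; C \, \operatorname{dist}\bigl(0,\, \partial f(x)\bigr)
\]
% for some $C > 0$. Smaller $\theta$ gives faster convergence of descent
% methods; $\theta = \tfrac{1}{2}$ typically yields linear rates.
```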
“…Although it is hardly comprehensive, one example is noteworthy: property (2.1) holds if $S = A^{*} \circ \nabla f \circ A$ for a strongly convex function $f$ and a matrix $A$ [25, p. 287]. See [7,38,24] for information on convex error bounds.…”
Section: Assumptions and Notation (mentioning)
confidence: 99%
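
A short sketch of why the composite structure mentioned in the excerpt yields a monotonicity property of this kind; the modulus $\mu$ below is the strong-convexity constant of $f$ and is an assumption of this sketch, not a quantity from the excerpt.

```latex
% If $f$ is $\mu$-strongly convex, its gradient is strongly monotone:
% $\langle \nabla f(u) - \nabla f(v),\, u - v \rangle \ge \mu \|u - v\|^{2}$.
% For $S = A^{*} \circ \nabla f \circ A$, using the adjoint identity
% $\langle A^{*} w,\, z \rangle = \langle w,\, A z \rangle$ with $u = Ax$, $v = Ay$:
\[
  \langle S(x) - S(y),\, x - y \rangle
  \;=\; \langle \nabla f(Ax) - \nabla f(Ay),\, Ax - Ay \rangle
  \;\ge\; \mu \,\| A(x - y) \|^{2},
\]
% i.e. $S$ is strongly monotone along the range of $A$, the kind of
% behavior the excerpt's property (2.1) captures.
```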
“…The essential strong quasi-monotonicity property (2.1) appears to be the weakest possible condition under which a first-order algorithm will converge linearly. This deep, difficult-to-characterize property is related to the Hoffman bound [18], the linear regularity assumption, and the Kurdyka-Łojasiewicz property [7]. We look forward to a calculus of operations that preserve this property.…”
Section: ⊓⊔ (mentioning)
confidence: 99%
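
For reference, the classical Hoffman error bound mentioned in the excerpt (cited there as [18]), stated for a polyhedron; the constant $\kappa$ depends only on the matrix $A$.

```latex
% Hoffman's bound: for the polyhedron $P = \{x : Ax \le b\}$ (assumed
% nonempty) there exists $\kappa = \kappa(A) > 0$ such that, for every $x$,
\[
  \operatorname{dist}(x, P) \;\le\; \kappa \, \bigl\| (Ax - b)_{+} \bigr\| ,
\]
% where $(\cdot)_{+}$ is the componentwise positive part: the distance to
% the feasible set is controlled by the total constraint violation.
```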