2022
DOI: 10.48550/arxiv.2202.09107
Preprint

Comparison of an Apocalypse-Free and an Apocalypse-Prone First-Order Low-Rank Optimization Algorithm

Abstract: We compare two first-order low-rank optimization algorithms, namely P²GD (Schneider and Uschmajew, 2015), which has been proven to be apocalypse-prone (Levin et al., 2021), and its apocalypse-free version P²GDR obtained by equipping P²GD with a suitable rank reduction mechanism (Olikier et al., 2022). Here an apocalypse refers to the situation where the stationarity measure goes to zero along a convergent sequence whereas it is nonzero at the limit. The comparison is conducted on two simple examples of a…
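For context, the notion recalled in the abstract can be stated formally. The display below is a paraphrase in our own notation (the symbols $x_i$, $x_*$, and the stationarity measure $\mathrm{s}(\cdot\,; f, C)$ are assumptions introduced here, not taken from the record): an apocalypse is a convergent sequence in the feasible set $C$ along which the stationarity measure vanishes even though the limit is not stationary.

% A minimal formal sketch, assuming a closed feasible set C, a cost function f,
% and a stationarity measure s(.; f, C) that vanishes exactly at stationary points.
\[
  x_i \in C \ \text{for all } i,
  \qquad
  x_i \to x_* \in C,
  \qquad
  \lim_{i \to \infty} \mathrm{s}(x_i; f, C) = 0,
  \qquad
  \mathrm{s}(x_*; f, C) > 0 .
\]

In the bounded-rank setting of the abstract, $C = \mathbb{R}^{m \times n}_{\le r}$, and such sequences can arise only when the limit $x_*$ has rank strictly smaller than $r$ (at rank-$r$ points the set is locally a smooth manifold, so the measure is continuous there); this is the situation the rank reduction mechanism of P²GDR is designed to handle.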

Cited by 1 publication (2 citation statements). References 4 publications.

“…This paper gathers, expands, and generalizes results of the technical reports [42] and [43]. P²GDR on $\mathbb{R}^{m \times n}_{\le r}$ (Algorithm 5.3 using Algorithm 6.3 in line 3) was proposed in [42] in response to a question raised in [33, §4]: "Is there an algorithm running directly on $\mathbb{R}^{m \times n}_{\le r}$ that only uses first-order information about the cost function and which is guaranteed to converge to a stationary point?"…”
Section: Introduction (mentioning, confidence: 92%)
“…The inclusion holds because, for all $z \in B[\hat{x}, \bar{\alpha}\,\mathrm{s}(\hat{x}; f, C)]$,
\[
  \|z - x\| \le \|z - \hat{x}\| + \|\hat{x} - x\| \le \bar{\alpha}\,\mathrm{s}(\hat{x}; f, C) + 2\varepsilon(x) \le \bar{\alpha}\,\|\nabla f(x)\| + \Delta \le \rho(x),
\]
where the last inequality follows from (42). Therefore, for all $y \in \mathrm{RFDR}(x; E, C, f, \alpha, \bar{\alpha}, \beta, c, \Delta)$,
\[
  f(y) \le f(x_R) \le f(x) - c\,\mu^2\,\mathrm{s}(x; f, C)^2 \min\!\left\{\alpha,\ \frac{2\beta(1-c)}{\operatorname{Lip}_{B[x,\rho(x)]}(\nabla f)}\right\} \le f(x) - c\,\mu^2(x)^2 \min\!\left\{\alpha,\ \frac{2\beta(1-c)}{\operatorname{Lip}_{B[x,\rho(x)]}(\nabla f)}\right\} \le f(x) - 3\delta(x) \le f(x) - \delta(x),
\]
where the second inequality follows from Corollary 7.7, the third from condition 2 of Assumption 3.1, the fourth from (42) and (41), and the fifth from (43). Let us now consider the case where $x \in C \setminus S_p$.…”
(mentioning, confidence: 99%)