2018
DOI: 10.1093/imaiai/iay003

The non-convex geometry of low-rank matrix optimization

Abstract: This work considers two popular minimization problems: (i) the minimization of a general convex function $f(X)$ with the domain being positive semi-definite matrices; (ii) the minimization of a general convex function $f(X)$ regularized by the matrix nuclear norm $\|X\|_*$ with the domain being general matrices. Despite their optimal statistical performance in the literature, these two optimization problems have a high computational complexity even when solved using tailored fast convex solvers. To develop faster and …
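The abstract is cut off before the proposed approach, but the remedy developed in this line of work is the Burer–Monteiro reparameterization: replace the PSD variable $X$ with $X = UU^\top$ for a tall matrix $U$ and run gradient descent on the factored, non-convex objective. Below is a minimal sketch of that idea for problem (i); taking $f$ to be a least-squares fit to a fixed symmetric matrix, along with the step size and iteration count, is an illustrative assumption rather than the paper's algorithm.

```python
import numpy as np

# Sketch of the factored (Burer-Monteiro) approach to problem (i):
# minimize f(X) over PSD X by writing X = U U^T and descending in U.
# Illustrative choice: f(X) = 0.5 * ||X - M||_F^2 for a fixed
# symmetric PSD matrix M, not the general convex f of the paper.

rng = np.random.default_rng(1)
n, r = 40, 3
A = rng.standard_normal((n, r))
M = A @ A.T                          # ground truth: PSD, rank r

U = rng.standard_normal((n, r))      # factored variable, X = U U^T
for _ in range(1500):
    grad_f = U @ U.T - M             # gradient of f at X = U U^T
    U -= 0.002 * (2.0 * grad_f @ U)  # chain rule: grad_U f(U U^T) = 2 grad_f(X) U
print("relative error:", np.linalg.norm(U @ U.T - M) / np.linalg.norm(M))
```

The factored problem is non-convex, but results of this kind show that, under suitable conditions on $f$, it has no spurious local minima, which is why plain gradient descent from random initialization can succeed.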

Cited by 65 publications (84 citation statements: 2 supporting, 82 mentioning, 0 contrasting; published 2019–2024)
References 34 publications

“…The reader can refer to the review paper [8] for more details. The geometric landscape of related loss functions for low-rank matrix recovery has been investigated in [18,19,28,4,32]. Similar results have also been established for non-convex formulations of other problems, for example blind deconvolution [49], dictionary learning [34,35], tensor completion [1,20], phase synchronization [5,29,6], and deep neural networks [39,47,33,25].…”
Section: Introduction (mentioning)
confidence: 75%
“…In particular, [160] derived a model-free theory where no assumptions are imposed on M, which accommodates, for example, the noisy case and the case where the truth is only approximately low-rank. The global landscape results extend to asymmetric matrix sensing/completion by considering a loss function regularized by the term $g(L, R) := \|L^\top L - R^\top R\|_F^2$ [159,173] or by the term $g(L, R) := \|L\|_F^2 + \|R\|_F^2$ [31]. Theorem 34 has been generalized to the asymmetric case in [40].…”
Section: Notes (mentioning)
confidence: 99%
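To make the quoted regularizers concrete: the first term penalizes imbalance between the factors of $X = LR^\top$, while the second is (up to scaling) the familiar variational form of the nuclear norm, since $\|X\|_* = \min_{X = LR^\top} \tfrac{1}{2}(\|L\|_F^2 + \|R\|_F^2)$. Here is a minimal sketch of gradient descent on a least-squares loss with the balancing term $g(L, R) = \|L^\top L - R^\top R\|_F^2$; the choice of loss, the weight lam, and the step size are illustrative assumptions, not the construction of [159,173].

```python
import numpy as np

def balanced_loss_and_grads(L, R, M, lam):
    """Least-squares loss on the factored model L R^T plus the
    balancing regularizer (lam / 4) * ||L^T L - R^T R||_F^2."""
    E = L @ R.T - M                 # residual of the factored model
    D = L.T @ L - R.T @ R           # imbalance between the two factors
    loss = 0.5 * np.sum(E**2) + 0.25 * lam * np.sum(D**2)
    grad_L = E @ R + lam * L @ D    # gradient w.r.t. L (both terms)
    grad_R = E.T @ L - lam * R @ D  # gradient w.r.t. R (both terms)
    return loss, grad_L, grad_R

# Toy run: recover a 30 x 20 rank-2 matrix by gradient descent on (L, R).
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
L = rng.standard_normal((30, 2))
R = rng.standard_normal((20, 2))
for _ in range(3000):
    loss, gL, gR = balanced_loss_and_grads(L, R, M, lam=0.5)
    L -= 0.01 * gL
    R -= 0.01 * gR
print(f"final loss: {loss:.3e}")
```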
“…Key to these results are certain error bound conditions, which elucidate the regularity properties of the underlying optimization problem. Recently, the above results have been extended to cover general smooth low-rank matrix optimization problems whose objective functions satisfy the restricted strong convexity and smoothness properties [29,52,53].…”
Section: Related (mentioning)
confidence: 99%
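For reference, one common statement of the restricted strong convexity and smoothness conditions invoked above is sketched below; the exact constants and rank parameters differ across [29,52,53], so this form is illustrative rather than the precise hypothesis of any one of those papers.

```latex
% Illustrative form: f is (alpha, beta)-restricted strongly convex
% and smooth over rank-r matrices if, for all X, Y with
% rank(X) <= r and rank(Y) <= r,
\[
  \frac{\alpha}{2}\,\|Y - X\|_F^2
  \;\le\;
  f(Y) - f(X) - \langle \nabla f(X),\, Y - X \rangle
  \;\le\;
  \frac{\beta}{2}\,\|Y - X\|_F^2 .
\]
```

When $\alpha$ and $\beta$ are close, i.e., the problem is well-conditioned over the low-rank cone, the landscape results quoted above typically guarantee the absence of spurious local minima in the factored formulation.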