2021
DOI: 10.1007/s10208-020-09490-9
Low-Rank Matrix Recovery with Composite Optimization: Good Conditioning and Rapid Convergence

Cited by 41 publications (42 citation statements)
References 60 publications
“…In particular, [10] shows that f satisfies (A1) when (i) ℓ_i, r_i are i.i.d. standard Gaussian vectors, (ii) m ≳ rd, and (iii) at most a small constant fraction of the entries of ξ are nonzero.…”
Section: Low-Rank Matrix Sensing and PolyakSGM (mentioning)
confidence: 99%
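To make the quoted conditions concrete, below is a minimal synthetic sketch, not the authors' code: the dimensions, the 10% outlier fraction, the initialization near the truth, and the use of the ground-truth objective value as the target in the Polyak step are all illustrative assumptions. It builds bilinear Gaussian measurements y_i = ⟨ℓ_i, M♯ r_i⟩ + ξ_i with sparse outliers ξ, forms the ℓ1 loss over a rank-r factorization, and runs the Polyak subgradient method (PolyakSGM) named in the section heading.

```python
# Hedged sketch of robust bilinear matrix sensing with the l1 loss and PolyakSGM.
# All problem sizes and constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 50, 40, 3
m = 10 * r * (d1 + d2)               # number of measurements, on the order of r*d
p_out = 0.1                          # small constant fraction of corrupted measurements

# Ground-truth low-rank matrix M# = L# R#^T
L_star = rng.standard_normal((d1, r))
R_star = rng.standard_normal((d2, r))

# i.i.d. standard Gaussian measurement vectors l_i, r_i (rows of Lmat, Rmat)
Lmat = rng.standard_normal((m, d1))
Rmat = rng.standard_normal((m, d2))
xi = np.zeros(m)
out = rng.random(m) < p_out
xi[out] = 10.0 * rng.standard_normal(out.sum())   # sparse gross outliers
y = np.sum((Lmat @ L_star) * (Rmat @ R_star), axis=1) + xi

def f(L, R):
    """Composite objective f(L, R) = (1/m) * sum_i |l_i^T L R^T r_i - y_i|."""
    return np.mean(np.abs(np.sum((Lmat @ L) * (Rmat @ R), axis=1) - y))

# Objective value at the ground truth, used as the target value in the Polyak step.
# This is known only because the data are simulated here.
f_target = f(L_star, R_star)

# Initialization near the solution (the composite-optimization papers use a
# spectral-type initialization; a small perturbation of the truth is used purely
# for illustration).
L = L_star + 0.3 * rng.standard_normal((d1, r))
R = R_star + 0.3 * rng.standard_normal((d2, r))

for it in range(200):
    U, V = Lmat @ L, Rmat @ R                       # shapes (m, r)
    res = np.sum(U * V, axis=1) - y
    s = np.sign(res)
    gL = (Lmat.T @ (s[:, None] * V)) / m            # subgradient w.r.t. L
    gR = (Rmat.T @ (s[:, None] * U)) / m            # subgradient w.r.t. R
    gap = max(f(L, R) - f_target, 0.0)
    step = gap / (np.sum(gL**2) + np.sum(gR**2) + 1e-12)   # Polyak step size
    L -= step * gL
    R -= step * gR

err = np.linalg.norm(L @ R.T - L_star @ R_star.T) / np.linalg.norm(L_star @ R_star.T)
print(f"relative error after Polyak subgradient steps: {err:.2e}")
```

In this synthetic setup, with roughly m ≳ rd measurements and a small outlier fraction, the relative error typically drops rapidly, in line with the sharpness-driven convergence the citing papers discuss.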
“…Sharp growth is known to hold in a range of problems, most classically in feasibility formulations of linear programs [34] (see also the survey [57]). Several contemporary problems also exhibit sharp growth, for example, nonconvex formulations of low-rank matrix sensing and completion problems [10].…”
Section: Sharp Growth (mentioning)
confidence: 99%
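For reference, the sharp-growth (sharpness) condition invoked here is commonly stated as follows; this is the standard formulation rather than a quotation from the citing paper, with μ > 0 a problem-dependent constant and X* the solution set.

```latex
% Sharpness / sharp growth of f with constant \mu > 0 on a region \mathcal{X}:
f(x) - \min_{x' \in \mathcal{X}} f(x') \;\ge\; \mu \,\mathrm{dist}\bigl(x, \mathcal{X}^*\bigr)
\qquad \text{for all } x \in \mathcal{X},
\quad \text{where } \mathcal{X}^* = \operatorname*{argmin}_{x' \in \mathcal{X}} f(x').
```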
“…Another line of work has focused on the development of fast nonconvex algorithms [Lee et al., 2018, Ma et al., 2018, Huang and Hand, 2018, Charisopoulos et al., 2019], which was largely motivated by recent advances in efficient nonconvex optimization for tackling statistical estimation problems [Candes et al., 2015, Chen and Candès, 2017, Charisopoulos et al., 2021, Keshavan et al., 2009, Jain et al., 2013, Zhang et al., 2016, Chen and Wainwright, 2015, Sun and Luo, 2016, Zheng and Lafferty, 2016, Wang et al., 2017a, Cai et al., 2021b, Wang et al., 2017b, Qu et al., 2017, Duchi and Ruan, 2019, Ma et al., 2019] (see Chi et al. [2019] for an overview). One such work proposed a feasible nonconvex recipe by attempting to optimize a regularized squared loss (which includes an extra penalty term to promote incoherence), and showed that, in conjunction with proper initialization, nonconvex gradient descent converges to the ground truth in the absence of noise.…”
Section: Define (mentioning)
confidence: 99%
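As an illustration of the nonconvex recipe described above, here is a minimal sketch under illustrative assumptions, not any cited paper's implementation: noiseless symmetric matrix sensing, the plain squared loss over a rank-r factorization without the incoherence penalty, spectral initialization, and constant-step gradient descent.

```python
# Hedged sketch: spectral initialization + gradient descent on the factored squared
# loss for noiseless symmetric matrix sensing. Sizes and step size are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, r = 40, 3
m = 12 * d * r
X_star = rng.standard_normal((d, r))
M_star = X_star @ X_star.T

# Symmetric Gaussian sensing matrices A_i and noiseless measurements y_i = <A_i, M#>
A = rng.standard_normal((m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('kij,ij->k', A, M_star)

# Spectral initialization: top-r eigenpairs of (1/m) * sum_i y_i A_i
S = np.einsum('k,kij->ij', y, A) / m
vals, vecs = np.linalg.eigh(S)
X = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

eta = 0.002   # constant step size, tuned by hand for this synthetic instance
for it in range(300):
    res = np.einsum('kij,ij->k', A, X @ X.T) - y      # residuals <A_i, X X^T> - y_i
    grad = np.einsum('k,kij->ij', res, A) @ X / m      # gradient of (1/(4m)) * sum_k res_k^2
    X -= eta * grad

err = np.linalg.norm(X @ X.T - M_star) / np.linalg.norm(M_star)
print(f"relative error after gradient descent: {err:.2e}")
```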