2016
DOI: 10.1137/15m1050525
Guarantees of Riemannian Optimization for Low Rank Matrix Recovery

Abstract: We establish theoretical recovery guarantees of a family of Riemannian optimization algorithms for low rank matrix recovery, which concerns recovering an m × n rank-r matrix from p < mn linear measurements. The algorithms are first interpreted as iterative hard thresholding algorithms with subspace projections. Based on this connection, we show that provided the restricted isometry constant R_{3r} of the sensing operator is less than C_κ/√r, the Riemannian gradient descent algorithm and a restarted …
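
To make the iterative-hard-thresholding reading of the abstract concrete, here is a minimal numerical sketch of one Riemannian gradient descent step, assuming a generic sensing matrix A acting on vectorized matrices and a fixed step size; the helper names (hard_threshold, tangent_project, rgrad_step) are illustrative and not from the paper.

```python
# Illustrative sketch (not the paper's code): one Riemannian gradient descent
# step written as iterative hard thresholding with a tangent-space projection.
import numpy as np

def hard_threshold(Z, r):
    """Rank-r truncation of Z via a truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def tangent_project(G, X, r):
    """Project G onto the tangent space of the rank-r manifold at X."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    UG = U @ (U.T @ G)
    return UG + (G - UG) @ V @ V.T

def rgrad_step(X, A, y, r, step=1.0):
    """One RGrad iteration for min 0.5*||y - A vec(X)||^2 over rank-r matrices."""
    grad = (A.T @ (y - A @ X.ravel())).reshape(X.shape)  # Euclidean gradient
    direction = tangent_project(grad, X, r)              # subspace projection
    return hard_threshold(X + step * direction, r)       # retraction back to rank r
```

In the algorithms analyzed in the paper the step size is chosen adaptively along the projected direction; a constant step is used here only to keep the sketch short.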

Cited by 121 publications (109 citation statements)
References 54 publications (128 reference statements)
“…(SVP) [7], normalized iterative hard thresholding [8], and Riemannian gradient descent (RGrad) [21]. We compare these algorithms under the same settings 100 times, and the final results are averaged over all runs.…”
Section: Performance Comparisons (mentioning; confidence: 99%)
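
A comparison of this kind reduces to a Monte Carlo loop: draw a random low-rank instance, run each solver, and average the recovery errors over the trials. The loop below is only a generic sketch of that setup; the problem generator, solver interface, and error metric are assumptions rather than details taken from the cited experiment.

```python
# Generic Monte Carlo comparison loop (assumed setup, not the cited experiment).
import numpy as np

def random_instance(m, n, r, p, rng):
    """Random rank-r ground truth and a Gaussian sensing matrix."""
    X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    A = rng.standard_normal((p, m * n)) / np.sqrt(p)
    return X_true, A, A @ X_true.ravel()

def compare(solvers, m=40, n=40, r=3, p=800, trials=100, seed=0):
    """Average relative recovery error of each solver over random trials."""
    rng = np.random.default_rng(seed)
    errors = {name: 0.0 for name in solvers}
    for _ in range(trials):
        X_true, A, y = random_instance(m, n, r, p, rng)
        for name, solve in solvers.items():
            X_hat = solve(A, y, m, n, r)   # each solver returns an m-by-n estimate
            errors[name] += np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
    return {name: err / trials for name, err in errors.items()}
```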
“…These algorithms involve a projection step which projects a matrix onto a low-rank space using a truncated SVD. In [21], a Riemannian method, termed RGrad, was proposed to extend NIHT by projecting the search direction of gradient descent into a low-dimensional space. Compared with the alternating minimization method, these IHT-based algorithms exhibit better convergence performance with lower computational complexity.…”
Section: Introduction (mentioning; confidence: 99%)
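
As a rough sketch of the two ingredients named in this statement, the snippet below shows the truncated-SVD rank-r projection shared by SVP/NIHT-type methods together with a gradient step restricted to a low-dimensional subspace (the column span of the current iterate, as in NIHT). The normalized step-size formula and the helper names are assumptions for illustration; RGrad instead projects the gradient onto the full tangent space of the fixed-rank manifold, as in the sketch after the abstract.

```python
# Sketch of an NIHT-style update with a subspace-restricted gradient
# (helper names and the step-size formula are illustrative assumptions).
import numpy as np

def rank_r(Z, r):
    """Truncated-SVD projection onto rank-r matrices (the SVP/IHT step)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def niht_step(X, A, y, r):
    """Gradient restricted to the column span of X, normalized step, then truncation."""
    U = np.linalg.svd(X, full_matrices=False)[0][:, :r]
    G = (A.T @ (y - A @ X.ravel())).reshape(X.shape)
    PG = U @ (U.T @ G)                                   # low-dimensional search direction
    step = np.sum(PG**2) / np.sum((A @ PG.ravel())**2)   # normalized step size (assumed form)
    return rank_r(X + step * PG, r)
```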
“…We will not discuss the approximation accuracy of Z_0 to X here; see [104, 84, 21] for related results. Starting from the spectral initialization, a sufficiently close initial guess can be constructed for PGD, and then an exact recovery guarantee can be established.…”
Section: Recovery Guarantees of PGD (mentioning; confidence: 99%)
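
For reference, the spectral initialization referred to here is, in its common form, a rank-r truncation of the back-projected measurements A*(y); how close this Z_0 lands to X is exactly what the cited results quantify. The sketch below assumes that form and a particular normalization of the sensing operator.

```python
# Spectral initialization sketch (the scaling convention is an assumption that
# depends on how the sensing operator A is normalized).
import numpy as np

def spectral_init(A, y, m, n, r):
    """Z_0: rank-r truncation of the back-projected measurements A^T y."""
    Z = (A.T @ y).reshape(m, n)          # back-projection
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```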
“…where the new search direction P_k is a weighted sum of the gradient descent direction G_k and the previous search direction P_{k-1}. Several choices of the combination weight are available [104, 100]. In each iteration, the Riemannian conjugate gradient descent algorithm has the same dominant computational cost as RGrad but with a substantially faster convergence rate.…”
Section: Riemannian Optimization on Low Rank Manifold (mentioning; confidence: 99%)
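
A short sketch of this direction update follows, using a Fletcher-Reeves style weight and a re-projection of the previous direction onto the current tangent space; both specific choices are assumptions made for illustration, since the cited works [104, 100] allow several combination weights.

```python
# Conjugate-gradient direction update sketch (the weight choice and the
# tangent re-projection step are illustrative assumptions).
import numpy as np

def transport(P_old, X_new, r):
    """Carry the previous direction to the tangent space at the new iterate."""
    U, _, Vt = np.linalg.svd(X_new, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    UP = U @ (U.T @ P_old)
    return UP + (P_old - UP) @ V @ V.T

def cg_direction(G_new, G_old, P_old, X_new, r):
    """P_k = G_k + beta_k * P_{k-1}, with a Fletcher-Reeves style weight (assumed)."""
    beta = np.sum(G_new**2) / max(float(np.sum(G_old**2)), 1e-16)
    return G_new + beta * transport(P_old, X_new, r)
```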