2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2015.7447128
A characterization of deterministic sampling patterns for low-rank matrix completion

Abstract: Low-rank matrix completion (LRMC) problems arise in a wide variety of applications. Previous theory mainly provides conditions for completion under missing-at-random samplings. This paper studies deterministic conditions for completion. An incomplete d × N matrix is finitely rank-r completable if there are at most finitely many rank-r matrices that agree with all its observed entries. Finite completability is the tipping point in LRMC, as a few additional samples of a finitely completable matrix guarantee its …

Cited by 29 publications (115 citation statements)

References 9 publications
“…Unfortunately, none of these algorithms succeeded at this task. This supports theoretical results, which show that even when non-adaptive LRMC is theoretically possible using only the minimum required r + 1 samples per column, it may be computationally prohibitive [24].…”
Section: Methods (supporting)
confidence: 89%
“…In addition, Algorithm 1 operates with as little as r + 1 samples per column (the minimum required: since X is rank-r, observing columns with at least r + 1 entries is necessary for completion [24]). Therefore, Algorithm 1 works even in the minimal sampling regime.…”
Section: What We Gain by Being Adaptive (mentioning)
confidence: 99%
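The minimal-sampling claim quoted above can be illustrated with a small numerical sketch. This is not the paper's algorithm; it only shows why r + 1 entries per column are enough information in principle: once the rank-r column space U of X is known, each column can be filled in by least squares from its observed entries. All names and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, r = 20, 15, 3                 # matrix size and rank (illustrative values)
U = rng.standard_normal((d, r))     # rank-r column space
V = rng.standard_normal((r, N))
X = U @ V                           # the rank-r matrix to complete

X_hat = np.zeros_like(X)
for j in range(N):
    # Observe only r + 1 entries of column j (the minimum per-column count).
    obs = rng.choice(d, size=r + 1, replace=False)
    # With U known, solve the (r+1) x r least-squares system for the
    # column's coefficients, then reconstruct the full column.
    coef, *_ = np.linalg.lstsq(U[obs], X[obs, j], rcond=None)
    X_hat[:, j] = U @ coef

print(np.allclose(X, X_hat))        # True up to floating-point error
```

The hard part in practice, and the subject of the cited theory, is that U is not known in advance; identifying it from such minimal deterministic samplings is what can be computationally prohibitive.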
“…, $x_G$ in $X$. In other words, given a dataset of $N$ incomplete measurements $\{A_{g_i} x_i\}_{i=1}^{N}$, it is possible to build $X$ by trying all the possible combinations of $G$ samples and keeping only the points $v$ which are the solutions of (14).…”
Section: Uniqueness of Low-Dimensional Models (mentioning)
confidence: 99%
“…Proof. If $R_{A_g} = R_{A_{g'}}$ for all $g \neq g'$, then the system in (14) will have multiple solutions for any choice of $x_1, \ldots$…”
Section: Uniqueness of Low-Dimensional Models (mentioning)
confidence: 99%