2018
DOI: 10.48550/arxiv.1803.07554
Preprint

Leave-one-out Approach for Matrix Completion: Primal and Dual Analysis

Abstract: In this paper, we introduce a powerful technique based on Leave-One-Out analysis to the study of low-rank matrix completion problems. Using this technique, we develop a general approach for obtaining fine-grained, entrywise bounds for iterative stochastic procedures in the presence of probabilistic dependency. We demonstrate the power of this approach in analyzing two of the most important algorithms for matrix completion: (i) the non-convex approach based on Projected Gradient Descent (PGD) for a rank-constra…
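The first algorithm named in the abstract, Projected Gradient Descent for a rank-constrained formulation, alternates a gradient step on the observed entries with a truncated-SVD projection onto the rank-r set. The sketch below illustrates that pattern on synthetic data; the step size, iteration count, and problem sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pgd_matrix_completion(M_obs, mask, r, eta=1.0, iters=200):
    """Minimize ||P_Omega(X - M)||_F^2 over rank-r matrices by PGD:
    a gradient step on the observed entries, then projection onto the
    rank-r set via truncated SVD. eta and iters are illustrative, not tuned."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X = X - eta * (mask * (X - M_obs))   # gradient step on Omega
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r, :]   # best rank-r approximation
    return X

# Toy instance: recover a random rank-2 matrix from ~50% of its entries.
rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5
X_hat = pgd_matrix_completion(M * mask, mask, r)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```

With `eta=1.0` the gradient step simply re-imputes the observed entries, so each iteration is a rank-r truncation of the partially imputed matrix; other step sizes trade off per-entry progress against stability.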


Cited by 11 publications (15 citation statements)
References 35 publications
“…It has been proved that with high probability X is the unique solution to (Matrix-Completion) [DC18]. Hence the matrix…”
Section: Linear Independence and Uniqueness of Primal
confidence: 99%
“…The matrix Y_0 is actually a dual certificate for X for (Matrix-Completion). We follow the construction procedure in [DC18]. First set k_0 := C_0 log(µr) for some large enough numerical constant C_0.…”
Section: B1 Proof of Lemma
confidence: 99%
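The quoted construction is in the style of a golfing scheme: the observed entries are split into k_0 fresh batches, and each batch contributes an unbiased update to the certificate Y_0 while the residual in the tangent space shrinks geometrically. The sketch below illustrates that generic pattern on synthetic data; the batch count, sampling rate, and random factors are assumptions for this toy example, not the parameters of the quoted paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, k0, q = 40, 2, 10, 0.3   # k0 batches; q = per-batch sampling rate (assumed)

# Random factors U, V with orthonormal columns standing in for the SVD factors.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))

def proj_T(Z):
    """Projection onto the tangent space T of the rank-r matrices at U V^T."""
    PU, PV = U @ U.T, V @ V.T
    return PU @ Z + Z @ PV - PU @ Z @ PV

Y = np.zeros((n, n))
W = U @ V.T                                # initial residual W_0 = U V^T
for _ in range(k0):
    Omega = rng.random((n, n)) < q         # fresh independent batch of entries
    Y = Y + Omega * W / q                  # unbiased update supported on the batch
    W = U @ V.T - proj_T(Y)                # residual ||U V^T - P_T(Y)||_F shrinks
```

After the k0 rounds, Y is supported on the observed entries while P_T(Y) is close to U V^T, which is the property a dual certificate needs.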
“…, where i ∈ [K] is chosen to satisfy (17). It follows directly from the definition of W that ‖W‖_F = 1.…”
Section: Upper Bound for Blind Deconvolution
confidence: 99%
“…For the case of very small ranks this result can be refined further [17]. Namely, one can remove one of the two log-factors at the cost of an r^3-dependence on the rank.…”
confidence: 99%
“…There are iterative methods, for specific problems, that provably converge to minimizers [9, 12, 17–19]. Analyses of these methods are of two types: those based on studying the iterate sequence [20–22], and those based on characterizing the landscape of smooth loss functions [17, 23, 24].…”
Section: Introduction
confidence: 99%