2017
DOI: 10.1007/s10107-017-1159-y
Max-norm optimization for robust matrix recovery

Abstract: This paper studies the matrix completion problem under arbitrary sampling schemes. We propose a new estimator incorporating both max-norm and nuclear-norm regularization, based on which we can conduct efficient low-rank matrix recovery using a random subset of entries observed with additive noise under general non-uniform and unknown sampling distributions. This method significantly relaxes the uniform sampling assumption imposed for the widely used nuclear-norm penalized approach, and makes low-rank matrix re…

Cited by 18 publications (27 citation statements)
References 32 publications
“…Therefore, their algorithm is only guaranteed to find a stationary point, and the statistical properties of such solutions are difficult to analyze. Recently, Fang et al. (2015b) proposed a scalable algorithm based on the alternating direction method of multipliers to efficiently solve the max-norm constrained optimization problem with a guaranteed rate of convergence to the global optimum. In summary, the max-norm constrained empirical risk minimization problem can indeed be solved in time polynomial in the sample size and matrix dimensions.…”
Section: Introduction
confidence: 99%
“…Passing to the limit $\nu \to \infty$ on both sides and using Lemma 3.2 yields $\sum_{j=k}^{\infty} \|x^{j+1} - x^j\| < \infty$. Combining the last inequality with (12) and recalling the definition of $c_1(\theta)$ in (18), we obtain $\frac{4m+3}{4(m+1)}$…”
confidence: 88%
“…and that the samples of the indices are drawn independently from a general sampling distribution $\Pi = \{\pi_{kl}\}_{k\in[n_1],\, l\in[n_2]}$ on $[n_1] \times [n_2]$. We adopt the same non-uniform sampling scheme as in [18], i.e., for each $(k,l) \in [n_1] \times [n_2]$, take $\pi_{kl} = p_k\, p_l$ with
$$p_k = \begin{cases} 2p_0, & k \le \frac{n_1}{10}, \\ 4p_0, & \frac{n_1}{10} < k \le \frac{n_1}{5}, \\ p_0, & \text{otherwise}, \end{cases}$$
where $p_0 > 0$ is a constant such that $\sum_{k=1}^{n_1} p_k = 1$, and $p_l$ is defined in a similar way. The entries $M_{i_t, j_t}$ with $(i_t, j_t) \in \Omega$ for $t = 1, 2, \ldots$…”
Section: Algorithm 2 (Nonmonotone Line Search PALM With Extrapolation)
confidence: 99%
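The piecewise sampling scheme quoted above is concrete enough to simulate. The sketch below draws observed index pairs from the product distribution $\pi_{kl} = p_k p_l$ with the three-level row weights ($2p_0$, $4p_0$, $p_0$); the function names are illustrative, not from the cited paper.

```python
import numpy as np

def make_row_probs(n):
    """Three-level sampling weights from the quoted scheme:
    2*p0 for indices k <= n/10, 4*p0 for n/10 < k <= n/5,
    and p0 otherwise, normalized so the weights sum to 1."""
    k = np.arange(1, n + 1)
    w = np.where(k <= n / 10, 2.0, np.where(k <= n / 5, 4.0, 1.0))
    return w / w.sum()

def sample_indices(n1, n2, m, seed=None):
    """Draw m index pairs (i, j) independently from
    Pi = {p_i * p_j}, as in the non-uniform scheme above."""
    rng = np.random.default_rng(seed)
    p, q = make_row_probs(n1), make_row_probs(n2)
    rows = rng.choice(n1, size=m, p=p)
    cols = rng.choice(n2, size=m, p=q)
    return rows, cols
```

Because the weights are a product distribution, rows and columns can be sampled independently, which keeps the sampler O(m) regardless of the matrix size.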
“…The max-norm is an alternative convex relaxation of the rank in the low-complexity matrix completion problem, and it enjoys stronger theoretical guarantees than the nuclear norm in several settings [52], [59]. Max-norm-based completion of a spectral matrix with missing data can be formulated as the following constrained optimization problem:…”
Section: Max-norm Minimization
confidence: 99%
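A convenient way to see what a max-norm constraint does in practice uses the factorization characterization $\|M\|_{\max} = \min_{M = UV^\top} \|U\|_{2,\infty}\,\|V\|_{2,\infty}$: bounding the largest row norms of the two factors bounds the max-norm of their product. The sketch below is a minimal PALM-style alternating projected-gradient loop under that constraint; it is an illustrative sketch, not the algorithm of the cited paper, and all names are hypothetical.

```python
import numpy as np

def project_rows(A, radius):
    """Rescale any row of A whose Euclidean norm exceeds `radius`,
    so that the row-norm bound ||A||_{2,inf} <= radius holds."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A * np.minimum(1.0, radius / np.maximum(norms, 1e-12))

def max_norm_complete(Y, mask, rank, radius, steps=2000, lr=0.02, seed=0):
    """Sketch of max-norm-constrained completion: minimize
    0.5 * ||mask * (U V^T - Y)||_F^2 by alternating projected
    gradient steps on U and V, keeping every row norm <= radius
    (which guarantees ||U V^T||_max <= radius**2)."""
    rng = np.random.default_rng(seed)
    n1, n2 = Y.shape
    U = project_rows(0.1 * rng.standard_normal((n1, rank)), radius)
    V = project_rows(0.1 * rng.standard_normal((n2, rank)), radius)
    for _ in range(steps):
        R = mask * (U @ V.T - Y)                 # residual on observed entries
        U = project_rows(U - lr * (R @ V), radius)
        R = mask * (U @ V.T - Y)                 # refresh residual after U step
        V = project_rows(V - lr * (R.T @ U), radius)
    return U @ V.T
```

The row-norm projection is the only ingredient that distinguishes this from plain factored gradient descent; it is what keeps the iterates inside the max-norm ball rather than merely in a low-rank set.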