Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2017)
DOI: 10.1145/3055399.3055431
Low rank approximation with entrywise ℓ1-norm error

Abstract: We study the ℓ1-low rank approximation problem, where for a given n × d matrix A and approximation factor α ≥ 1, the goal is to output a rank-k matrix Â for which ‖A − Â‖_1 ≤ α · min_{rank-k A'} ‖A − A'‖_1, where for an n × d matrix C, we let ‖C‖_1 = Σ_{i=1}^n Σ_{j=1}^d |C_{i,j}|. This error measure is known to be more robust than the Frobenius norm in the presence of outliers and is indicated in models where Gaussian assumptions on the noise may not apply. The problem was shown to be NP-hard by Gillis and Vavasis, and a number of heuristics have been proposed. […]
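As a minimal illustration of the error measure in the abstract (not code from the paper), the snippet below computes the entrywise ℓ1 error ‖A − Â‖_1 of a rank-k approximation and compares it with the Frobenius error, on a low-rank matrix corrupted by a few large outliers. All names here are hypothetical, and the truncated SVD is only the Frobenius-optimal baseline, not an ℓ1-optimal solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 30, 3

# Low-rank ground truth plus a few large sparse outliers,
# the regime where the l1 measure is more robust than Frobenius.
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))
A[rng.integers(0, n, 10), rng.integers(0, d, 10)] += 100.0

# Baseline: best rank-k approximation under the Frobenius norm (truncated SVD).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_hat = (U[:, :k] * s[:k]) @ Vt[:k]

# Entrywise l1 error: ||A - A_hat||_1 = sum_{i,j} |A_ij - A_hat_ij|.
l1_err = float(np.abs(A - A_hat).sum())
fro_err = float(np.linalg.norm(A - A_hat, "fro"))
print(l1_err, fro_err)
```

For any matrix the entrywise ℓ1 norm dominates the Frobenius norm, so the first printed value is always at least the second; the gap widens as outliers grow.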

Cited by 54 publications (113 citation statements). References 65 publications.
“…The solution V to min_V ‖SU*V − SA‖_{1,2} is a √r-approximation to the original problem min_V ‖SU*V − SA‖_1. In the ‖·‖_{1,2} norm, the solution V can be written in terms of the so-called normal equations for regression, namely V = (SU*)† SA, where C† denotes the Moore–Penrose pseudoinverse of C. The key property exploited in [67] is then that although we do not know U*, (SU*)† SA is a k-dimensional subspace in the row span of SA providing a √r-approximation, and one does know SA. This line of reasoning ultimately leads to a poly(k)-approximation.…”
Section: Algorithms for 0 < p
confidence: 99%
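The normal-equations step in the excerpt above can be sketched as follows. This is an illustrative reconstruction only: it assumes a Gaussian sketching matrix S and a random stand-in U_star for the unknown factor U* (the actual sketches used in [67] differ), and it shows why V = (SU*)† SA minimizes the ‖·‖_{1,2} objective, since each column of V is an independent least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, m = 200, 40, 4, 20  # m = sketch size (number of rows of S)

A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))
U_star = rng.standard_normal((n, k))  # stand-in for the unknown factor U*

# Hypothetical Gaussian sketch; [67] uses different sketching distributions.
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Normal-equations / pseudoinverse solution V = (S U*)^dagger S A.
# Under ||.||_{1,2} (sum of column Euclidean norms), each column of V is an
# independent least-squares problem, so the columnwise closed form applies.
V = np.linalg.pinv(S @ U_star) @ (S @ A)
print(V.shape)
```

Note that V lies in the row span of SA by construction, which is the property the excerpt highlights: the subspace is computable without knowing U*.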
“…There is also related work on robust PCA [15, 17, 55, 56, 72, 74] and measures which minimize the sum of Euclidean norms of rows [20, 23–25, 65], though neither directly gives an algorithm for ℓ1-low rank approximation. Song et al. [67] gave the first approximation algorithms with provable guarantees for entrywise ℓp-low rank approximation for p ∈ [1, 2). Their algorithm provides a poly(k log n) approximation and runs in polynomial time; that is, the algorithm outputs a matrix B for which ‖A − B‖_p ≤ poly(k log n) · min_{rank-k A'} ‖A − A'‖_p.…”
Section: Introduction
confidence: 99%
“…In particular, minimizing the entrywise ℓ1 or ℓ0 norms is expected to improve the robustness of the estimation, but unfortunately the problem formulated in these norms turns out to be NP-hard; see, e.g., Gillis (2018), Song (2017), and Bringmann (2017). For the more general case of entrywise ℓp norms, see Chierichetti (2017).…”
Section: Previous Work and the Current State of the Art
confidence: 99%
“…After the publication of an earlier version of this paper, Song, Woodruff and Zhong proposed several approximation algorithms for ℓ1-LRA [36], addressing the second part of open question 2 in [38]. In particular, they showed that it is possible to achieve an approximation factor α = (log n) · poly(r) in nnz(M) + (m + n) poly(r) time, where nnz(M) denotes the number of non-zero entries of M.…”
Section: Discussion
confidence: 99%