2006
DOI: 10.1007/11830924_28

Adaptive Sampling and Fast Low-Rank Matrix Approximation

Abstract: We prove that any real matrix A contains a subset of at most 4k/ε + 2k log(k + 1) rows whose span "contains" a matrix of rank at most k with error only (1 + ε) times the error of the best rank-k approximation of A. We complement it with an almost matching lower bound by constructing matrices where the span of any k/(2ε) rows does not "contain" a relative (1 + ε)-approximation of rank k. Our existence result leads to an algorithm that finds such a rank-k approximation in time essentially O(Mk/ε), where M is the nu…
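
The abstract sketches the paper's adaptive sampling idea: draw rows with probability proportional to their squared residual norm against the span of the rows sampled so far, then take the best rank-k matrix inside that span. Below is a minimal numpy sketch of this idea; the round count, the per-round sample size, and the helper name adaptive_row_sampling are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def adaptive_row_sampling(A, k, eps, rng=None):
    """Project A onto the span of adaptively sampled rows, then truncate
    to rank k. Rows are drawn with probability proportional to their
    squared residual norm against the span sampled so far."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Q = np.zeros((n, 1))            # orthonormal basis of the sampled row span
    E = A.copy()                    # current residuals A - A Q Q^T
    S = []                          # sampled row indices
    rounds = max(1, int(np.ceil(np.log2(k + 1))))  # assumption, not the paper's constant
    per_round = int(np.ceil(4 * k / eps))          # assumption, not the paper's constant
    for _ in range(rounds):
        p = np.sum(E**2, axis=1)
        if p.sum() == 0:            # A already lies in the sampled span
            break
        S.extend(rng.choice(m, size=per_round, replace=True, p=p / p.sum()))
        Q = np.linalg.qr(A[S].T)[0]  # basis for the span of the sampled rows
        E = A - A @ Q @ Q.T
    B = A @ Q @ Q.T                  # projection of A onto the sampled span
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]  # best rank-k matrix within that span
```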

Cited by 136 publications (196 citation statements). References 13 publications.
“…, a_m), and can be computed in time O(min{mn^2, m^2 n}) using the Singular Value Decomposition (SVD). Some recent work on the p = 2 case [1,2,3,4,5,9,12], initiated by a result due to Frieze, Kannan, and Vempala [7], has focused on algorithms for computing a k-dimensional subspace that gives a (1 + ε)-approximation to the optimum in time O(mn · poly(k, 1/ε)), i.e., linear in the number of coordinates we store. Most of these algorithms, with the exception of [1,12], depend on subroutines that sample poly(k, 1/ε) points from given a_1, a_2, …”
Section: Introduction (mentioning)
confidence: 99%
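
The statement above contrasts the exact SVD baseline, costing O(min{mn^2, m^2 n}) for an m x n matrix, with sampling-based (1 + ε)-approximations. For reference, a minimal numpy sketch of that exact baseline (the function name best_rank_k is ours):

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A in Frobenius (and spectral) norm,
    via the truncated SVD; cost is O(min{m n^2, m^2 n}) for an m x n A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k]

# The sampling-based methods discussed above trade this exact optimum
# for a (1 + eps)-approximation computed in time linear in m*n.
```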
“…That is, one needs to compute the core tensor S appearing in (1.1) accordingly. The papers [1,5,7,12,14,19,22,30,34,36,37] essentially choose S in a particular way.…”
Section: 2 (mentioning)
confidence: 99%
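
Equation (1.1) of the citing paper is not reproduced here; assuming it denotes a standard Tucker/HOSVD-style decomposition T ≈ S x_1 U_1 x_2 U_2 …, the following is a minimal numpy sketch of one common way to compute the core tensor S. The function name hosvd_core and the per-mode rank choices are our assumptions, not the citing paper's method.

```python
import numpy as np

def hosvd_core(T, ranks):
    """Compute Tucker factors and core via the standard HOSVD recipe:
    for each mode, take the top-r left singular vectors of the mode
    unfolding of T, then contract T with the transposed factors."""
    factors = []
    for mode, r in enumerate(ranks):
        # unfold T along `mode` into a matrix of shape (T.shape[mode], -1)
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        factors.append(np.linalg.svd(M, full_matrices=False)[0][:, :r])
    S = T
    for mode, U in enumerate(factors):
        # contract mode `mode` of S with U^T, keeping the axis order intact
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, mode, 0), axes=1), 0, mode)
    return S, factors
```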
“…Specifically, they show that there exist k columns with which one can get a √(k + 1) relative-error approximation in Frobenius norm, which is tight. Later, Deshpande and Vempala [6] provide a two-step algorithm that yields a relative approximation in expectation: first, approximate the "volume sampling" introduced in [5] by successively choosing one column at each step with carefully chosen probabilities; then, choose…”
Section: Comparison to Related Work (mentioning)
confidence: 99%
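
The two-step procedure described above begins by approximating volume sampling with adaptive sampling: pick one column at a time, with probability proportional to its squared residual norm against the span of the columns already chosen. A minimal numpy sketch of that first step, under our naming assumptions (the second step of [6] is omitted):

```python
import numpy as np

def approx_volume_sampling(A, k, rng=None):
    """Pick k columns of A one at a time, each with probability
    proportional to its squared residual norm against the span of the
    columns already chosen (the adaptive-sampling approximation to
    volume sampling); constants and refinements are omitted."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    chosen = []
    R = A.copy()                    # residual of every column
    for _ in range(k):
        p = np.sum(R**2, axis=0)
        if p.sum() == 0:            # all columns already in the chosen span
            break
        j = rng.choice(n, p=p / p.sum())
        chosen.append(j)
        # project all columns off the span of the chosen columns
        Q = np.linalg.qr(A[:, chosen])[0]
        R = A - Q @ (Q.T @ A)
    return chosen
```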