2006
DOI: 10.1137/s0097539704442696
Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix

Abstract: In many applications, the data consist of (or may be naturally formulated as) an m × n matrix A. It is often of interest to find a low-rank approximation to A, i.e., an approximation D to the matrix A of rank not greater than a specified rank k, where k is much smaller than m and n. Methods such as the singular value decomposition (SVD) may be used to find an approximation to A which is the best in a well-defined sense. These methods require memory and time which are superlinear in m and n; for many applications…
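As a concrete illustration of the rank-$k$ approximation described above, here is a minimal sketch (in Python/NumPy, with a hypothetical random matrix and rank) of the exact SVD-based method whose superlinear cost motivates the paper:

```python
import numpy as np

# Hypothetical data: an m x n matrix A that is approximately rank k.
m, n, k = 1000, 500, 10
rng = np.random.default_rng(0)
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
A += 0.01 * rng.standard_normal((m, n))  # small noise

# Exact truncated SVD: the best rank-k approximation to A,
# but it costs superlinear time and memory in m and n.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
D = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print("rank(D) <= k:", np.linalg.matrix_rank(D) <= k)
print("relative Frobenius error:", np.linalg.norm(A - D) / np.linalg.norm(A))
```

By the Eckart–Young theorem this truncation is optimal in both the spectral and Frobenius norms, but the full SVD takes $O(\min\{mn^2, m^2 n\})$ time, which is exactly the cost the paper's Monte Carlo algorithms are designed to avoid.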

Cited by 450 publications (484 citation statements) · References 22 publications
“…, $a_m$), and can be computed in time $O(\min\{mn^2, m^2 n\})$ using the Singular Value Decomposition (SVD). Some recent work on the $p = 2$ case [1,2,3,4,5,9,12], initiated by a result due to Frieze, Kannan, and Vempala [7], has focused on algorithms for computing a $k$-dimensional subspace that gives a $(1+\epsilon)$-approximation to the optimum in time $O(mn \cdot \mathrm{poly}(k, 1/\epsilon))$, i.e., linear in the number of coordinates we store. Most of these algorithms, with the exception of [1,12], depend on subroutines that sample $\mathrm{poly}(k, 1/\epsilon)$ points from given $a_1, a_2, \ldots$…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
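The sampling subroutines this excerpt refers to can be sketched as follows: sample a small number $c$ of columns of $A$ with probability proportional to their squared Euclidean norms, rescale, and use the top-$k$ left singular vectors of the small sample in place of those of $A$. The sketch below is in the spirit of the paper's linear-time SVD approximation; the matrix sizes and sample count are illustrative assumptions, not the paper's stated parameters or guarantees:

```python
import numpy as np

def sampled_low_rank(A, k, c, rng):
    """Norm-squared column sampling: draw c columns of A with probability
    proportional to their squared norms, rescale, and build a rank-k
    approximation from the top-k left singular vectors of the sample."""
    col_norms2 = (A ** 2).sum(axis=0)
    p = col_norms2 / col_norms2.sum()          # sampling probabilities
    idx = rng.choice(A.shape[1], size=c, p=p)  # sample with replacement
    C = A[:, idx] / np.sqrt(c * p[idx])        # rescaled sample, shape (m, c)
    H, _, _ = np.linalg.svd(C, full_matrices=False)
    Hk = H[:, :k]                              # approximate top-k left singular vectors
    return Hk @ (Hk.T @ A)                     # rank-k approximation H_k H_k^T A

# Hypothetical usage on a random near-low-rank matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 10)) @ rng.standard_normal((10, 800))
D = sampled_low_rank(A, k=10, c=100, rng=rng)
print("relative Frobenius error:", np.linalg.norm(A - D) / np.linalg.norm(A))
```

Sampling with these norm-squared probabilities makes $CC^T$ an unbiased estimator of $AA^T$, which is what lets the top singular vectors of the small matrix $C$ stand in for those of $A$.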
“…One may have to be innovative in seeking appropriate $\|\cdot\|$ and $f$. Other linear algebraic functions are similarly of interest such as estimating eigenvalues, determinants, inverses, matrix multiplication and its applications to maxcut, clustering and other graph problems; see the tomes [77,78,79] for sampling solutions to many of these problems for the case when $A$ is fully stored.…”
Section: Linear Algebra | Citation type: mentioning | Confidence: 99%
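One of the sampling solutions alluded to here, approximate matrix multiplication, is the subject of the companion paper in this series: sample $c$ column–row outer products with probability proportional to the product of their norms and rescale, so the sum is an unbiased estimator of $AB$. A minimal sketch under illustrative assumptions (the sizes and sample count below are not from the paper):

```python
import numpy as np

def sampled_matmul(A, B, c, rng):
    """Monte Carlo estimate of A @ B: sample c column-row pairs with
    probability proportional to |A[:, i]| * |B[i, :]| and rescale, so the
    sum of rank-one terms is an unbiased estimator of the product."""
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=c, p=p)
    scale = 1.0 / (c * p[idx])
    # Sum over sampled terms of A[:, i] B[i, :] / (c * p_i).
    return (A[:, idx] * scale) @ B[idx, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 1000))
B = rng.standard_normal((1000, 150))
approx = sampled_matmul(A, B, c=200, rng=rng)
print("relative error:", np.linalg.norm(A @ B - approx) / np.linalg.norm(A @ B))
```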
“…In many applications ranging from DNA microarray analysis, facial and object recognition, to web search models, we encounter the following problem (1): Given a large $N \times N$ matrix $A$, one wants to find the best approximation of $A$ by a low-rank matrix $D$, i.e.,
$$\min_{D \in \mathbb{R}^{N \times N};\ \operatorname{rank}(D) \le k} \|A - D\|, \qquad [1]$$
where the norm is usually taken as the spectral norm $\|\cdot\|_2$ or the Frobenius norm $\|\cdot\|_F$. The solution to this problem is easily formulated in terms of the singular value decomposition (SVD) of $A$.…”
Section: Introduction | Citation type: mentioning | Confidence: 99%
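The SVD formulation mentioned in this excerpt is the Eckart–Young theorem: truncating the SVD of $A$ after its $k$ largest singular values attains the minimum in [1], with optimal error $\sigma_{k+1}$ in the spectral norm and $(\sum_{i>k} \sigma_i^2)^{1/2}$ in the Frobenius norm. A short numerical check on a hypothetical random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 300, 15
A = rng.standard_normal((N, N))

U, s, Vt = np.linalg.svd(A)
D = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-k approximant

# Eckart-Young: optimal errors are given by the trailing singular values.
print(np.isclose(np.linalg.norm(A - D, 2), s[k]))                             # spectral
print(np.isclose(np.linalg.norm(A - D, "fro"), np.sqrt((s[k:] ** 2).sum())))  # Frobenius
```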