2010
DOI: 10.1137/080736417

A Randomized Algorithm for Principal Component Analysis

Abstract: Principal component analysis (PCA) requires the computation of a low-rank approximation to a matrix containing the data being analyzed. In many applications of PCA, the best possible accuracy of any rank-deficient approximation is at most a few digits (measured in the spectral norm, relative to the spectral norm of the matrix being approximated). In such circumstances, efficient algorithms have not come with guarantees of good accuracy, unless one or both dimensions of the matrix being approximated are small. …
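As context for the abstract, here is a minimal NumPy sketch of randomized low-rank PCA in the spirit of the paper's approach: a random projection captures an approximate range of the data matrix, a few power iterations refine it, and an exact SVD is taken in the small projected subspace. The function name, oversampling amount, and iteration count are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def randomized_pca(A, k, n_iter=2, oversample=10, seed=None):
    """Rank-k PCA of A (m x n) via a randomized range finder.

    A minimal sketch: random projection, a few power iterations,
    then an exact SVD on the small projected matrix.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    l = min(k + oversample, min(m, n))   # sketch width, slightly oversampled

    A = A - A.mean(axis=0)               # center columns: PCA, not plain SVD

    # Random projection: Q spans an approximate range of A.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, l)))

    # Power iterations sharpen Q when singular values decay slowly.
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)

    # Exact SVD of the small l x n matrix, lifted back to m dimensions.
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]   # centered A ≈ U @ np.diag(s) @ Vt
```

Each power iteration costs two passes over A, so the whole computation stays close to a handful of matrix multiplies, which is why the approach pays off when, as the abstract notes, only a few digits of accuracy are attainable anyway.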

Cited by 373 publications (345 citation statements)
References 32 publications
“…complexity of the coding step is thus similarly reduced when replacing (b) or (c) estimators in (21), but the latter option has a memory usage in O(n k). Although estimators (c) are slightly less performant in the first epochs, they are a good compromise between resource usage and convergence.…”
Section: All Other Estimators
confidence: 86%
“…Solving (24) is made closer and closer to solving (21), to ensure the correctness of the algorithm (see Section IV). Yet, computing the estimators (b) is no more costly than computing (a) and still permits to speed up a single iteration close to r times.…”
Section: All Other Estimators
confidence: 99%
“…The input matrices are randomly generated with size ranging from 1000² to 22000² and rank ranging from 100 to 1000. Figure 3(a) shows a performance comparison of our GPU and CPU implementations to Tygert SVD [10], which is a very fast approximate SVD algorithm that exploits random projection. Here we set the size of the test matrices to range from 1,000² to 22,000², and the rank to range from 100 to 1000.…”
Section: Results
confidence: 99%
“…For testing and evaluation, we compared results of our GPU-based algorithm to the following three implementations: 1) a highly-optimized, multi-threaded CPU version of QUIC-SVD implemented using Intel Math Kernel Library; 2) MATLAB svds routine; and 3) the Tygert SVD [10], which is a fast CPU-based approximate SVD algorithm built upon random projection. In each test case, we plot the running time as well as the speedup over a range of matrix sizes and ranks.…”
Section: Results
confidence: 99%
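The quoted benchmarks sweep matrix size and rank and report running time against approximate SVD baselines. A hedged, scaled-down harness in the same spirit, reusing the randomized_pca sketch above (the sizes, seed, and error metric here are illustrative, not those of the cited study):

```python
import time
import numpy as np

# Illustrative size/rank sweep; randomized_pca is the sketch given earlier.
for n in (1000, 2000, 4000):
    for k in (50, 100):
        rng = np.random.default_rng(0)
        # Synthetic test matrix of rank ~k, as in the quoted setup.
        A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
        t0 = time.perf_counter()
        U, s, Vt = randomized_pca(A, k)
        dt = time.perf_counter() - t0
        # randomized_pca factors the column-centered matrix, so compare to it;
        # error is in the spectral norm, as the abstract measures accuracy.
        err = (np.linalg.norm(A - A.mean(axis=0) - (U * s) @ Vt, 2)
               / np.linalg.norm(A, 2))
        print(f"n={n:5d} k={k:4d} time={dt:6.2f}s rel. spectral err={err:.2e}")
```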