2021
DOI: 10.1109/lsp.2020.3044130

Rapid Robust Principal Component Analysis: CUR Accelerated Inexact Low Rank Estimation

Cited by 36 publications (22 citation statements)
References 19 publications
“…One of our future directions is to study learning-based stochastic approaches. While this paper focuses on deterministic RPCA approaches, stochastic RPCA approaches, e.g., partialGD [8], PG-RMC [9], and IRCUR [11], have shown promising speed advantages, especially for large-scale problems. Another future direction is to explore robust tensor decomposition incorporated with deep learning, as some preliminary studies have shown the advantages of tensor structure in certain machine learning tasks [45,46].…”
Section: Discussion (mentioning)
confidence: 99%
“…For huge-scale problems where even a single truncated SVD is computationally prohibitive, one may replace the SVD step in the initialization with a batch-based low-rank approximation, e.g., a CUR decomposition. While its stability lacks theoretical support when outliers appear, empirical evidence shows that a single CUR decomposition can provide a sufficiently good initialization [11].…”
Section: Proposed Methods (mentioning)
confidence: 99%
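The statement above leaves the CUR step abstract. As a rough illustration only, here is a minimal Python/NumPy sketch of a single CUR decomposition used as a cheap surrogate for a truncated SVD, assuming uniform sampling of rows and columns; the function name, the 4·r sample-size heuristic, and the use of a full pseudoinverse are our own choices, not details taken from [11].

```python
import numpy as np

def cur_approx(M, r, rng=None):
    """Cheap low-rank surrogate for a truncated SVD via one CUR
    decomposition with uniformly sampled rows/columns (a sketch;
    the 4*r sample size is a hypothetical heuristic, not from [11])."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = M.shape
    nrows, ncols = min(m, 4 * r), min(n, 4 * r)   # heuristic sample sizes
    I = rng.choice(m, size=nrows, replace=False)  # sampled row indices
    J = rng.choice(n, size=ncols, replace=False)  # sampled column indices
    C = M[:, J]                                   # selected columns
    R = M[I, :]                                   # selected rows
    U = np.linalg.pinv(M[np.ix_(I, J)])           # pinv of intersection block
    return C @ U @ R                              # CUR estimate of M
```

Only the sampled rows and columns of M are touched, so the cost scales with r rather than with both full dimensions, which is what makes a single CUR pass attractive as an initializer when even one truncated SVD is prohibitive.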
“…When the goal is to estimate the r leading eigenvalues and eigenvectors of the kernel matrix, the number of landmarks m should be greater than or equal to r. Recent empirical and theoretical results show that increasing the number of landmarks m and restricting the Nyström approximation to a lower rank-r space leads to higher quality estimates [11], [15], [29], [38]. To explain the rank reduction procedure, we first compute the thin QR decomposition of the matrix C to find a factorization of the form C = QR, where Q ∈ R^{n×m}, QᵀQ = I_m, and R ∈ R^{m×m} is an upper triangular matrix [39]. The second step is to compute the eigenvalue decomposition of the matrix B := RW†Rᵀ ∈ R^{m×m}, which takes O(m³) time; the number of landmarks m is assumed to be independent of n. Let B = VΣVᵀ be the spectral decomposition of B, where the diagonal matrix Σ ∈ R^{m×m} contains the eigenvalues of B in nonincreasing order and the columns of V ∈ R^{m×m} are the eigenvectors.…”
Section: Preliminaries and Landmark Selection Techniques (mentioning)
confidence: 99%
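As a concrete reading of the procedure quoted above, the following Python/NumPy sketch estimates the r leading eigenpairs of a kernel matrix K from a given set of landmark indices; the function name, the choice of pseudoinverse for W†, and the explicit symmetrization of B are our assumptions rather than details from the cited works.

```python
import numpy as np

def nystrom_rank_r(K, landmarks, r):
    """Rank-r Nystrom eigenpair estimate via the QR route described above
    (a sketch; the name and the symmetrization of B are our additions)."""
    C = K[:, landmarks]                          # n x m landmark columns
    W = K[np.ix_(landmarks, landmarks)]          # m x m intersection block
    Q, R = np.linalg.qr(C, mode="reduced")       # thin QR: C = Q R
    B = R @ np.linalg.pinv(W) @ R.T              # B := R W^+ R^T  (m x m)
    evals, V = np.linalg.eigh(0.5 * (B + B.T))   # eigendecomposition of B
    order = np.argsort(evals)[::-1]              # nonincreasing eigenvalues
    evals, V = evals[order], V[:, order]
    return evals[:r], Q @ V[:, :r]               # top-r eigenpairs of K
```

Since B is only m×m, its eigendecomposition costs O(m³) independent of n; keeping the top r columns of QV yields the rank-r estimate the quotation describes.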
“…As the sparse coding and the summation all happen in the original domain, there is no additional transform-related computational burden per query in LSDAT. The most efficient computational complexity for RPCA is obtained by the accelerated alternating projections algorithm (IRCUR) [10], which is (for…”
Section: Complexity and Convergence Rate Analysis (mentioning)
confidence: 99%