2016
DOI: 10.1214/15-aos1369
Statistical and computational trade-offs in estimation of sparse principal components

Abstract: In recent years, sparse principal component analysis has emerged as an extremely popular dimension reduction technique for high-dimensional data. The theoretical challenge, in the simplest case, is to estimate the leading eigenvector of a population covariance matrix under the assumption that this eigenvector is sparse. An impressive range of estimators have been proposed; some of these are fast to compute, while others are known to achieve the minimax optimal rate over certain Gaussian or sub-Gaussian classes.…

Cited by 95 publications (105 citation statements). References 61 publications.
“…() in the context of the lasso in high-dimensional linear models, or Johnstone and Lu (), or Wang et al. () in the context of sparse principal component analysis.…”
Section: Data-driven Projection Estimator for a Single Change Point
Mentioning confidence: 99%
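As an illustration of the kind of fast sparse PCA estimator this statement refers to, here is a minimal NumPy sketch, with all parameter values assumed for the example, of diagonal thresholding in the spirit of Johnstone and Lu: keep the coordinates with the largest sample variances, then run ordinary PCA on the corresponding submatrix.

```python
import numpy as np

def diagonal_thresholding_pca(X, k):
    """Sparse leading-eigenvector estimate via diagonal thresholding:
    keep the k coordinates with the largest sample variance, run PCA
    on the corresponding submatrix, and embed the result back into R^p."""
    n, p = X.shape
    support = np.argsort(X.var(axis=0))[-k:]        # k highest-variance coords
    sub_cov = np.cov(X[:, support], rowvar=False)   # k x k sample covariance
    _, eigvecs = np.linalg.eigh(sub_cov)            # eigh sorts ascending
    v_hat = np.zeros(p)
    v_hat[support] = eigvecs[:, -1]                 # leading sub-eigenvector
    return v_hat

# Synthetic data from a single-spike model Sigma = I + theta * v v^T,
# with v unit-norm and 5-sparse (illustrative parameter choices).
rng = np.random.default_rng(0)
n, p, k, theta = 500, 200, 5, 4.0
v = np.zeros(p)
v[:k] = 1.0 / np.sqrt(k)
X = rng.standard_normal((n, p)) + np.sqrt(theta) * rng.standard_normal((n, 1)) * v
v_hat = diagonal_thresholding_pca(X, k)
loss = min(np.linalg.norm(v_hat - v), np.linalg.norm(v_hat + v))  # sign-invariant
```

The estimator is computable in O(np + k^3) time; its statistical price relative to the minimax rate is exactly the kind of trade-off the cited paper studies.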
“…(), Ma () and Wang et al. ()). To give a flavour of these results, let V_n denote the set of all estimators of v_1, i.e.…”
Section: Introduction
Mentioning confidence: 94%
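The minimax framing behind this truncated statement can be sketched as follows; the notation and the choice of loss are assumptions made for illustration, not taken verbatim from the cited work.

```latex
% Hedged sketch: with $\mathcal{V}_n$ the set of all estimators of $v_1$
% based on $n$ observations, the minimax risk over a class $\mathcal{P}$
% of distributions is
\[
  \inf_{\hat v \in \mathcal{V}_n} \;
  \sup_{P \in \mathcal{P}} \;
  \mathbb{E}_P \, L(\hat v, v_1),
  \qquad
  L(u, v) = \sqrt{1 - (u^\top v)^2},
\]
% where $L$ is one common sign-invariant loss (the sine of the angle
% between $u$ and $v$).
```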
“…As an alternative method, in Figs (a)–(d) we also present the corresponding results for the variants, due to Wang et al. (), of the semidefinite programming algorithm introduced by d’Aspremont et al. ().…”
Section: Introduction
Mentioning confidence: 99%
“…The main question to be answered in sparse PCA is whether there exists an algorithm that is not only asymptotically consistent but also computationally efficient. Theoretical research on the statistical guarantees of sparse PCA includes consistency [2,8,14,38,41,50,53,55], minimax risk bounds for estimating eigenvectors [40,42,43,45,61], optimal sparsity level detection [4,44,48,59] and principal subspace estimation [5,9,15,16,36,40,51,57], established under various statistical models. Because most of these methods are based on the spiked covariance model, we first give an introduction to the spiked covariance model and then review the theoretical analysis of high-dimensional sparse PCA from the above several aspects.…”
Section: Theoretical Analysis of High-Dimensional Sparse PCA
Mentioning confidence: 99%
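Since the statement above pivots on the spiked covariance model, here is a minimal numerical sketch of the single-spike case Sigma = I_p + theta * v v^T; the parameter values are assumed for illustration only.

```python
import numpy as np

# Single-spike covariance model: Sigma = I_p + theta * v v^T,
# where v is a unit-norm, k-sparse leading eigenvector.
p, k, theta = 50, 4, 3.0
v = np.zeros(p)
v[:k] = 0.5                                # ||v||_2 = sqrt(4 * 0.25) = 1
Sigma = np.eye(p) + theta * np.outer(v, v)

eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
leading_val = eigvals[-1]                  # spike pushes this to 1 + theta
alignment = abs(eigvecs[:, -1] @ v)        # leading eigenvector is +/- v
```

The spectrum makes the estimation problem concrete: one eigenvalue at 1 + theta with eigenvector v, and the remaining p - 1 eigenvalues at 1, so recovering v amounts to finding the sparse direction carrying the excess variance theta.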
“…They established minimax rates for estimation under ℓ2 loss with ℓq-penalized estimators for suitable model parameters. Wang et al. [59] considered the question of whether it is possible to find an estimator of u_1 that is both computable in polynomial time and attains the minimax optimal rate of convergence to u_1. They showed that no randomized polynomial-time algorithm can achieve the minimax optimal rate.…”
Mentioning confidence: 99%