2012 19th IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2012.6467015

Invariance of principal components under low-dimensional random projection of the data

Abstract: Algorithms that can efficiently recover principal components of high-dimensional data from compressive sensing measurements (e.g., low-dimensional random projections) of it have been an important topic of recent interest in the literature. In this paper, we show that, under certain conditions, normal principal component analysis (PCA) on such low-dimensional random projections of data actually returns the same result as PCA on the original data set would. In particular, as the number of data samples increases, …
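As a minimal numerical sanity check of this claim (not the authors' procedure), the sketch below compresses every sample with its own low-dimensional Gaussian random matrix, maps it back to the ambient space, runs ordinary PCA on both the original and the projected data, and compares the leading principal directions. The dimensions, the per-sample Gaussian matrices R_i, and the back-projection step R_i^T R_i x_i are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming per-sample Gaussian projections and back-projection.
import numpy as np

rng = np.random.default_rng(0)
N, M, n_samples = 100, 20, 5000          # ambient dim, projection dim, samples

# Synthetic data with one dominant principal direction.
top_pc = rng.standard_normal(N)
top_pc /= np.linalg.norm(top_pc)
X = (rng.standard_normal((n_samples, 1)) * 5.0) * top_pc \
    + 0.5 * rng.standard_normal((n_samples, N))

# Compress each sample with its own random projection, then map back to R^N.
Z = np.empty_like(X)
for i, x in enumerate(X):
    R = rng.standard_normal((M, N)) / np.sqrt(M)   # E[R^T R] = I
    Z[i] = R.T @ (R @ x)

def leading_pc(A):
    """Leading eigenvector of the sample covariance (ordinary PCA)."""
    A = A - A.mean(axis=0)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[0]

pc_original = leading_pc(X)
pc_projected = leading_pc(Z)
print("abs. cosine similarity:", abs(pc_original @ pc_projected))  # close to 1
```

With enough samples the reported cosine similarity approaches 1, i.e. the leading principal direction estimated from the randomly projected data aligns with the one estimated from the original data.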

Cited by 54 publications (53 citation statements)
References 7 publications
“…Results are in line with the findings of Qi and Hughes (2012), where it is theoretically verified that, although RP disperses the energy of a PC in different directions, the original PC remains as the direction with the most energy. Due to this, oscillations with similar variance can be assigned to different, adjacent components, leading to some ambiguity in the indices.…”
Section: Data (supporting)
confidence: 90%
“…With a subspace of 10% of the original dimensions, we were able to recover the PCs explaining 96% of the variance in the original data set and with 1% we still could recover the PCs explaining 94% of the original variance. The findings of this work are supported by the results presented in Qi and Hughes (2012). In their paper, it is theoretically and experimentally shown that a normal PCA performed on low-dimensional random subspaces recovers the principal components of the original data set very well, and as the number of data samples n increases the principal components of the random subspace converge to the true original components.…”
Section: Discussion (supporting)
confidence: 75%
“…Sparse approximation applied to semantic hierarchies [41] has been shown to be efficient in categorizing between large numbers of classes [35]. Ideas from compressed sensing have also been applied to dimensionality reduction tools to develop a theory of sketched SVD based on randomly projected data [23,47,29]. Methods to promote sparsity and parsimonious representations of high-dimensional data from few measurements include the development of sparse PCA (SPCA) [64] and, in a related line of work in feature selection for classification, penalized and sparse LDA [60,16].…”
Section: Related Work (mentioning)
confidence: 99%
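The "sketched SVD based on randomly projected data" mentioned in the quotation can be illustrated with one common recipe: a Gaussian range finder followed by a small exact SVD. This is a generic sketch of the idea, not necessarily the construction used in the cited works; the rank, oversampling amount, and test matrix below are illustrative choices.

```python
# Minimal sketch of a randomized ("sketched") SVD via random projection.
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=None):
    """Approximate top-`rank` SVD of A using a Gaussian sketch of its range."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch: compress the column space of A with a random test matrix.
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for the sketch
    # Project A onto the low-dimensional subspace and do a small exact SVD.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 500))  # rank ~50
U, s, Vt = randomized_svd(A, rank=50, seed=2)
print("relative error:", np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```

Because the test data are (numerically) low rank, the sketch captures essentially the whole column space and the relative reconstruction error printed at the end is tiny.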
“…It is convenient to use the Parks-McClellan optimal digital filter to obtain an applicable band-pass filter with a narrow and steep transition band [3]. The filter designed by this algorithm, however, has a high order, implying that the hardware implementation is complex, which in turn leads to higher power consumption of the hardware system.…”
Section: Introduction (mentioning)
confidence: 99%
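To make the trade-off in that remark concrete, here is a minimal Parks-McClellan (equiripple) band-pass design using SciPy's remez routine. The sample rate, band edges, and tap count are made-up values for illustration, not taken from the cited paper; the point is that a narrow, steep transition band forces a high tap count (filter order), which is what drives the hardware complexity and power cost mentioned in the quotation.

```python
# Minimal sketch of a Parks-McClellan band-pass design (illustrative values only).
import numpy as np
from scipy import signal

fs = 1000.0                                  # sample rate in Hz (assumed)
numtaps = 151                                # tap count; steeper transitions need more taps
bands   = [0, 90, 100, 200, 210, fs / 2]     # stop / pass / stop band edges in Hz
desired = [0, 1, 0]                          # desired gain in each band

taps = signal.remez(numtaps, bands, desired, fs=fs)

# Inspect the achieved stop-band attenuation.
w, h = signal.freqz(taps, worN=2048, fs=fs)
stop = (w < 90) | (w > 210)
print(f"{numtaps} taps; worst stop-band gain "
      f"{20 * np.log10(np.max(np.abs(h[stop]))):.1f} dB")
```

Re-running with narrower transition bands (say 95-100 Hz and 200-205 Hz) requires noticeably more taps to hold the same stop-band attenuation, which illustrates the order/complexity penalty described above.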