1999
DOI: 10.1111/1467-9868.00196

Probabilistic Principal Component Analysis

Abstract: Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based on a probability model. We demonstrate how the principal axes of a set of observed data vectors may be determined through maximum likelihood estimation of parameters in a latent variable model that is closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss, wit…
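The paper's closed-form maximum likelihood solution is W_ML = U_q(Λ_q − σ²I)^(1/2)R, with σ² the average of the discarded eigenvalues. A minimal NumPy sketch of that solution (function and variable names are mine, and the arbitrary rotation R is taken as the identity):

```python
import numpy as np

def ppca_ml(Y, q):
    """Closed-form maximum likelihood PPCA (sketch).

    Y : (N, D) data matrix, one observation per row.
    q : number of retained principal axes.
    Returns (W, sigma2, mu).
    """
    mu = Y.mean(axis=0)
    S = np.cov(Y, rowvar=False)              # sample covariance, (D, D)
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]   # eigenvalues in descending order
    sigma2 = vals[q:].mean()                 # average of discarded eigenvalues
    W = vecs[:, :q] * np.sqrt(vals[:q] - sigma2)
    return W, sigma2, mu
```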

Cited by 2,921 publications (2,214 citation statements)
References 38 publications
“…Then, the PCs for each individual in the validation sample are predicted using the probabilistic PCA [15] (PPCA). The predicted PCs are readily compared with those for different samples, or can be superimposed onto the PCs of the reference sample in the same metric space.…”
Section: Methods (mentioning)
confidence: 99%
“…Hence, an iterative algorithm to compute C is desirable. Roweis (1998) and Tipping and Bishop (1999) proposed a view where the observed data is the projection of lower-dimensional underlying latent data. Specifically, we let Y = CX + e, (35) where Y ~ N(0, CCᵀ + R) ∈ R^(D×T) is the observed data, X ~ N(0, I) ∈ R^(P×T) is the latent data set, and e ~ N(0, R) is the observation noise.…”
Section: Distributed PCA as an Expectation Maximization Algorithm (mentioning)
confidence: 99%
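As a quick sanity check of this generative view (my own simulation, reusing the quoted dimensions D, P, T), columns of Y drawn as CX + e should have covariance approaching CCᵀ + R:

```python
import numpy as np

rng = np.random.default_rng(0)
D, P, T = 5, 2, 100_000                 # observed dim, latent dim, sample count
C = rng.normal(size=(D, P))             # projection onto the observed space
R = 0.1 * np.eye(D)                     # observation noise covariance

X = rng.normal(size=(P, T))             # latent data, X ~ N(0, I)
e = rng.multivariate_normal(np.zeros(D), R, size=T).T
Y = C @ X + e                           # observed data, Y ~ N(0, C C^T + R)

# the empirical covariance of Y converges to C C^T + R as T grows
print(np.abs(np.cov(Y) - (C @ C.T + R)).max())
```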
“…Based on this probabilistic latent variable model, Roweis (1998) and Tipping and Bishop (1999) also showed an iterative EM algorithm to obtain the principal components without explicitly computing the sample covariance matrix. In each iteration, the E-step projects the data onto a lower-dimensional subspace, while the M-step seeks an update of that subspace that minimizes the mean squared distance between the original and the projected data (i.e.…”
Section: Distributed PCA as an Expectation Maximization Algorithm (mentioning)
confidence: 99%
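In the zero-noise limit this EM iteration reduces to Roweis' two-line algorithm: the E-step computes X = (CᵀC)⁻¹CᵀY and the M-step updates C = YXᵀ(XXᵀ)⁻¹. A self-contained sketch (my own code, not the cited implementation):

```python
import numpy as np

def em_pca(Y, p, n_iter=200):
    """EM for PCA in the zero-noise limit (after Roweis 1998) -- sketch.

    Y : (D, T) centred data, one observation per column.
    p : dimension of the target principal subspace.
    Returns C (D, p), whose columns span the principal subspace.
    """
    D, _ = Y.shape
    C = np.random.default_rng(0).normal(size=(D, p))  # random initial subspace
    for _ in range(n_iter):
        X = np.linalg.solve(C.T @ C, C.T @ Y)     # E-step: project the data
        C = (Y @ X.T) @ np.linalg.inv(X @ X.T)    # M-step: refit the subspace
    return C
```

Note that the columns of C span the principal subspace but are not orthonormal eigenvectors; an orthonormal basis can be recovered afterwards, e.g. with np.linalg.qr(C).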
“…The method is based on the EM algorithm [19,21], which enables the calculation of the eigenspaces, i.e., the maximum likelihood solution of PCA, in the case of missing data. The fact that we can calculate the PCA on a subset of pixels in the input images makes it possible to remove the outliers and treat them as missing pixels, arriving at a robust PCA representation.…”
Section: Introduction (mentioning)
confidence: 99%
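A simple EM-flavoured way to realize this (my own illustration, not the exact algorithm of [19,21]) is to alternately impute the missing, or outlying, pixels from the current rank-p reconstruction and refit the subspace:

```python
import numpy as np

def pca_missing(Y, mask, p, n_iter=100):
    """PCA with missing entries via alternating imputation (sketch).

    Y    : (N, D) data; entries where mask is False are treated as missing.
    mask : (N, D) boolean, True where Y is observed.
    p    : number of principal components.
    Returns (components (p, D), imputed data (N, D)).
    """
    col_means = np.nanmean(np.where(mask, Y, np.nan), axis=0)
    Z = np.where(mask, Y, col_means)             # initial fill: column means
    for _ in range(n_iter):
        m = Z.mean(axis=0)
        U, s, Vt = np.linalg.svd(Z - m, full_matrices=False)
        recon = m + (U[:, :p] * s[:p]) @ Vt[:p]  # rank-p reconstruction
        Z = np.where(mask, Y, recon)             # re-impute missing entries only
    return Vt[:p], Z
```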