1999
DOI: 10.1162/089976699300016728

Mixtures of Probabilistic Principal Component Analyzers

Abstract: Principal component analysis (PCA) is one of the most popular techniques for processing, compressing, and visualizing data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Therefore, previous attempts to formulate mixture models for PCA have been ad hoc to some extent. In this article, PCA is formulated within a maximum likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization (EM) algorithm. We discuss the advantages of this model in the context of clustering, density modeling, and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
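The maximum-likelihood solution for a single probabilistic PCA model has a closed form in terms of the eigendecomposition of the sample covariance matrix, and the mixture model extends it with EM over component responsibilities. Below is a minimal NumPy sketch of the single-component estimates, assuming the standard result (principal eigenvectors scaled by the eigenvalue excess over the noise variance, with the arbitrary rotation factor taken as the identity); the function and variable names are ours:

    import numpy as np

    def ppca_ml(X, q):
        # X: (n, d) data matrix; q: latent dimension, q < d.
        n, d = X.shape
        mu = X.mean(axis=0)
        S = (X - mu).T @ (X - mu) / n              # sample covariance
        evals, evecs = np.linalg.eigh(S)           # ascending eigenvalues
        evals, evecs = evals[::-1], evecs[:, ::-1] # reorder to descending
        sigma2 = evals[q:].mean()                  # ML noise variance: mean of discarded eigenvalues
        W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
        return W, sigma2, mu

The returned parameters define a Gaussian density with covariance W W^T + sigma^2 I, which is what makes combining several such local models into a well-defined mixture possible.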

Cited by 1,583 publications (1,105 citation statements)
References 17 publications
“…The Mixtures of Principal Component Analyzers model from [30] and the Mixtures of Factor Analyzers model from [31] have in common with the MPGSM model that the mixtures all incorporate dimension reductions, either through PCA or Factor Analysis. However, the underlying model is different: in [30], [31] the low dimensional approximation error is considered to be Gaussian noise with a diagonal covariance matrix.…”
Section: Discussion (mentioning)
Confidence: 99%
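To make the modeling contrast in this excerpt concrete: in the mixture of probabilistic PCA model the residual noise is isotropic Gaussian (covariance sigma^2 I, a special case of diagonal), whereas mixtures of factor analyzers allow a general diagonal noise covariance. A small NumPy sketch of the two covariance structures, with illustrative dimensions and values of our choosing:

    import numpy as np

    rng = np.random.default_rng(0)
    d, q = 5, 2
    W = rng.standard_normal((d, q))           # loadings / principal subspace

    sigma2 = 0.1
    C_ppca = W @ W.T + sigma2 * np.eye(d)     # PPCA: isotropic noise sigma^2 * I

    psi = rng.uniform(0.05, 0.5, size=d)
    C_fa = W @ W.T + np.diag(psi)             # factor analysis: diagonal noise Psi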
“…In general, q < d, such that we obtain a lower dimensional description of the observed signal vector. These models are sometimes also called generative [30], in the sense that a high-dimensional vector y j can be obtained by mapping a low-dimensional vector t j to a higher dimensional space, followed by adding a residual g j . In our application, we consider the following linear latent variable model:…”
Section: Latent Variable Models for Dimension Reduction (mentioning)
Confidence: 99%
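The generative reading in this excerpt can be written out directly: draw a low-dimensional latent vector t_j, map it linearly into the observation space, and add a residual g_j. A minimal NumPy sketch under assumed names (W for the linear map, mu for the offset, and Gaussian latents and residuals, as in the PPCA setting):

    import numpy as np

    rng = np.random.default_rng(1)
    d, q, n = 10, 3, 500                       # observed dim, latent dim (q < d), samples

    W = rng.standard_normal((d, q))            # linear map from latent to observation space
    mu = rng.standard_normal(d)                # offset in observation space
    sigma = 0.1                                # residual noise scale

    T = rng.standard_normal((n, q))            # latent vectors t_j
    G = sigma * rng.standard_normal((n, d))    # residuals g_j
    Y = T @ W.T + mu + G                       # observed vectors y_j = W t_j + mu + g_j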