2008
DOI: 10.1109/tsp.2007.907912

Signal Modeling and Classification Using a Robust Latent Space Model Based on $t$ Distributions

Abstract: Factor analysis is a statistical covariance modeling technique based on the assumption of normally distributed data. A mixture of factor analyzers can hence be viewed as a special case of Gaussian (normal) mixture models, providing a mathematically sound framework for attribute-space dimensionality reduction. A significant shortcoming of mixtures of factor analyzers is the vulnerability of normal distributions to outliers. Recently, the replacement of normal distributions with the heavier-tailed Student's-t dis…
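The abstract's motivation for the $t$ distribution is its heavy tails. As standard background (not quoted from the truncated abstract itself), the Student's-t density arises as a Gaussian scale mixture, which is what makes the model robust: outliers are absorbed by small mixing scales $u$ rather than by distorting $\mu$ and $\Sigma$:

\[
\mathrm{St}(x \mid \mu, \Sigma, \nu)
= \int_0^{\infty} \mathcal{N}\!\left(x \,\middle|\, \mu, \tfrac{\Sigma}{u}\right)
\mathrm{Gam}\!\left(u \,\middle|\, \tfrac{\nu}{2}, \tfrac{\nu}{2}\right) du,
\qquad
\mathrm{St}(x \mid \mu, \Sigma, \nu) \xrightarrow{\;\nu \to \infty\;} \mathcal{N}(x \mid \mu, \Sigma).
\]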

Cited by 44 publications (36 citation statements)
References 26 publications
“…Optimal model size (order) selection for finite mixture models is an important but very difficult problem which has not been completely resolved. Usually, penalized likelihood-based or entropy-based criteria are used for this purpose [20], such as the Bayesian information criterion (BIC) of Schwarz [19], and variants [22].…”
Section: Gaussian Mixture Regression for Robot Learning by Demonstration (mentioning)
confidence: 99%
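The quoted passage describes penalized-likelihood model order selection via BIC. A minimal illustrative sketch of that procedure, assuming scikit-learn's GaussianMixture and synthetic data (this is generic BIC-based selection, not the citing paper's own code):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic two-component data stands in for the real attribute space.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-3.0, 1.0, size=(200, 2)),
                   rng.normal(3.0, 1.0, size=(200, 2))])

    # Fit each candidate order and score it with Schwarz's Bayesian
    # information criterion; the lowest BIC wins.
    bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
           for k in range(1, 7)}
    best_k = min(bic, key=bic.get)
    print("BIC per order:", bic)
    print("selected order:", best_k)

On data like this the criterion typically recovers k = 2, since the likelihood gain from extra components no longer offsets the parameter-count penalty.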
“…Hence, by maximizing this lower bound L(q) (variational free energy) so that it becomes as tight as possible, not only do we minimize the KL-divergence between the true and the variational posterior, but we also implicitly integrate out the unknowns W. Due to the considered conjugate prior configuration of the DPGMR model, the variational posterior q(W) is expected to take the same functional form as the prior, p(W) [22]; thus, it is expected to factorize as…”
Section: Inference for the DPGMR Model (mentioning)
confidence: 99%
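For reference, the identity the quoted passage relies on is the standard evidence decomposition of variational Bayes (textbook background, with $W$ denoting the collected unknowns as in the quote):

\[
\log p(X) = \mathcal{L}(q) + \mathrm{KL}\bigl(q(W) \,\|\, p(W \mid X)\bigr),
\qquad
\mathcal{L}(q) = \mathbb{E}_{q(W)}\bigl[\log p(X, W)\bigr] - \mathbb{E}_{q(W)}\bigl[\log q(W)\bigr].
\]

Because $\log p(X)$ does not depend on $q$, maximizing $\mathcal{L}(q)$ tightens the bound precisely by minimizing the KL term; under the conjugate, mean-field configuration the optimal posterior then factorizes as $q(W) = \prod_i q_i(W_i)$, with each factor taking the functional form of the corresponding prior.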