2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2012.6288022
K-MLE: A fast algorithm for learning statistical mixture models

Abstract: We describe k-MLE, a fast and efficient local search algorithm for learning finite statistical mixtures of exponential families such as Gaussian mixture models. Mixture models are traditionally learned using the expectation-maximization (EM) soft clustering technique that monotonically increases the incomplete (expected complete) likelihood. Given prescribed mixture weights, the hard clustering k-MLE algorithm iteratively assigns data to the most likely weighted component and updates the component models using …
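The hard-assignment loop described in the abstract can be sketched for spherical Gaussian mixtures as follows. This is a minimal illustration under uniform prescribed weights; all function and variable names are my own, not the paper's:

```python
import numpy as np

def k_mle_spherical(X, k, n_iters=50, seed=0):
    """Sketch of a k-MLE-style hard-assignment loop for spherical Gaussians.

    Assignment: each point goes to the component maximizing its weighted
    log-density.  Update: per-component maximum likelihood estimates.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Farthest-point initialization keeps the initial means spread out.
    means = [X[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.array(means)[None]) ** 2).sum(2).min(axis=1)
        means.append(X[d2.argmax()])
    means = np.array(means, dtype=float)
    variances = np.ones(k)
    weights = np.full(k, 1.0 / k)      # prescribed (here: uniform) weights
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iters):
        # Hard assignment: argmax of log(weight) + spherical Gaussian log-density.
        sq = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        log_lik = (np.log(weights)
                   - 0.5 * d * np.log(2 * np.pi * variances)
                   - 0.5 * sq / variances)
        labels = log_lik.argmax(axis=1)
        # MLE update of each non-empty component.
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue               # empty clusters need special care
            means[j] = members.mean(axis=0)
            variances[j] = max(((members - means[j]) ** 2).mean(), 1e-8)
    return means, variances, labels
```

Unlike EM's soft responsibilities, each point contributes to exactly one component per iteration, which is what makes the k-means-style update cheap.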


Cited by 24 publications (12 citation statements)
References 39 publications
“…will be used to seek the final group tracklet, {G}. EM [31], [32] is an iterative method for obtaining maximum likelihood estimates.…”
Section: Deduce the People's Interaction In A Tracklet Clusters (mentioning)
confidence: 99%
“…This approach is limited to small dimensions. (Recently, additively-weighted Bregman Voronoi diagrams have also been used to learn mixtures of exponential families [10].) -Use non-metric tree search structures like Bregman ball trees [11] or Bregman vantage point trees [12] that can be straightforwardly extended by taking into account a weight on each point.…”
Section: MAP Decision Rule and Additive Bregman Voronoi Diagrams (mentioning)
confidence: 99%
“…In [1], the k-MLE algorithm is described with Lloyd's method: assign all observations to their closest cluster, update the clusters' parameters, and so on until convergence, first on the parameters and then on the complete likelihood. This method can produce empty clusters (especially when the number of clusters and the data dimension are large).…”
Section: k-MLE For Mixtures Of Wishart Distributions (mentioning)
confidence: 99%
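One generic safeguard for the empty-cluster issue noted above is to reseed an emptied component at the worst-fitted observation. The sketch below is my own illustration of that heuristic, not the fix used in the cited work:

```python
import numpy as np

def reseed_empty(X, labels, means, k):
    """Reseed any component that received no points at the observation
    with the largest squared residual under its current assignment
    (a generic heuristic, not the cited paper's method)."""
    counts = np.bincount(labels, minlength=k)
    for j in np.flatnonzero(counts == 0):
        # Squared residual of each point against its assigned mean.
        resid = ((X - means[labels]) ** 2).sum(axis=1)
        worst = int(resid.argmax())
        means[j] = X[worst]     # move the empty component onto that point
        labels[worst] = j
    return labels, means
```

Calling this after each assignment step guarantees every component keeps at least one observation, at the cost of a possible temporary drop in the complete likelihood.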
“…As remarked in [1], each component of the mixture may have its own generator F*_j. It is of particular interest when the number of observations in X_i is known.…”
Section: Remarks For Mixtures Of Wishart Distributions (mentioning)
confidence: 99%