2005
DOI: 10.1007/11573036_33
Gossip-Based Greedy Gaussian Mixture Learning

Abstract: It has been recently demonstrated that the classical EM algorithm for learning Gaussian mixture models can be successfully implemented in a decentralized manner by resorting to gossip-based randomized distributed protocols. In this paper we describe a gossip-based implementation of an alternative algorithm for learning Gaussian mixtures in which components are added to the mixture one after another. Our new Greedy Gossip-based Gaussian mixture learning algorithm uses gossip-based parallel search, starting from…
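The decentralized EM scheme the abstract refers to rests on a simple primitive: nodes repeatedly contact random peers and average their local statistics, so that every node converges to the network-wide mean without any central coordinator. The sketch below is a minimal illustration of push-pull gossip averaging; the function name and parameters are ours, not from the paper.

```python
import random

def gossip_average(values, rounds=50, seed=0):
    """Push-pull gossip averaging: in each round a random pair of
    nodes exchanges values and both keep the pair's average.
    The global sum is preserved, so all nodes converge to the mean."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        avg = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = avg
    return vals
```

In the gossip-based EM setting, the averaged quantities would be the local sufficient statistics of the mixture (responsibilities-weighted sums) rather than scalars, but the convergence mechanism is the same.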


Cited by 34 publications (52 citation statements). References 14 publications.
“…Future research could, e.g., investigate the effect of the recently developed greedy mixture learning [47], [48], where, starting from one component, a new component is iteratively added and the complete mixture is updated.…”
Section: Discussion
confidence: 99%
“…Therefore, the maximum likelihood of the mixture can be determined by iteratively adding a new component to the mixture. In this chapter, the greedy EM algorithm (Verbeek et al., 2003; Vlassis & Likas, 2002) for learning GMMs is used, since it is able to find the global likelihood maximum and to estimate the unknown number of mixture components. This algorithm can be summarized as follows.…”
Section: Gaussian Mixture Models Estimator
confidence: 99%
“…Hence, a global search is required. One approach, proposed in Vlassis & Likas (2002), uses all the data points as initial candidates for the sought component. Every point is the mean of a corresponding candidate (m_{T+1} = x_p), with the same covariance matrix σ²I, where σ is set according to Weston & Watkins (1999).…”
Section: Gaussian Mixture Models Estimator
confidence: 99%
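The candidate search quoted above can be sketched directly: try each data point x_p as the mean of a new spherical component and keep the candidate whose two-component mixture (1−α)·f_T + α·N(m, σ²I) maximizes the data log-likelihood. This is a minimal NumPy illustration under our own naming and a fixed mixing weight α; the original method also tunes α and sets σ per the cited references.

```python
import numpy as np

def best_candidate(X, mixture_pdf, sigma, alpha=0.5):
    """Global search for a new mixture component: every data point
    x_p is tried as candidate mean m_{T+1} = x_p with fixed spherical
    covariance sigma^2 I; return the candidate maximizing the
    log-likelihood of (1-alpha)*f_T + alpha*N(m, sigma^2 I)."""
    d = X.shape[1]
    f_old = mixture_pdf(X)  # current mixture density at each data point
    norm = (2.0 * np.pi * sigma**2) ** (-d / 2.0)
    best_m, best_ll = None, -np.inf
    for m in X:
        sq = np.sum((X - m) ** 2, axis=1)
        cand = norm * np.exp(-sq / (2.0 * sigma**2))
        ll = np.sum(np.log((1.0 - alpha) * f_old + alpha * cand))
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m, best_ll
```

For example, if the current mixture covers only one cluster of the data, the search selects a candidate mean inside the uncovered cluster, which is exactly the behavior the greedy scheme relies on.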
“…Experiments have shown that in some cases there is a significant dependence on the initialization of the model parameters, especially the regression parameters β_jk. A possible solution is to design an incremental procedure for learning a regression mixture model by adopting successful schemes that have already been presented for classical mixture models [15]. Finally, we plan to study the performance of the proposed methodology and its extensions in computer vision applications, such as visual tracking and object detection in a video surveillance domain [16], [17].…”
Section: Discussion
confidence: 99%