2009 11th IEEE International Conference on High Performance Computing and Communications
DOI: 10.1109/hpcc.2009.45

Fast Parallel Expectation Maximization for Gaussian Mixture Models on GPUs Using CUDA

Cited by 65 publications (33 citation statements)
References 2 publications

“…Several papers [30,31,32,33] have already considered the use of a GPU to calculate these conditional probabilities. This calculation is a weighted sum of Gaussian Probability Density Function (PDF) calculations, and while rearrangement of the algebra is possible, the algorithm used is fixed.…”
Section: Computational Load
confidence: 99%
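For reference, the "weighted sum of Gaussian PDF calculations" in this statement is the standard Gaussian mixture density; in textbook form (not quoted from the cited papers):

```latex
p(\mathbf{x}) \;=\; \sum_{m=1}^{M} w_m\,
\mathcal{N}\!\left(\mathbf{x};\,\boldsymbol{\mu}_m,\boldsymbol{\Sigma}_m\right),
\qquad \sum_{m=1}^{M} w_m = 1,\quad w_m \ge 0 .
```

Each conditional probability is then a responsibility, obtained by dividing one weighted component density by this sum.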
“…The high computational cost of these processes for large M and substantial amounts of data has already motivated research into the application of GPUs to this task, for example [30,31,32,33].…”
Section: GMM Probability Computation Optimization Techniques
confidence: 99%
“…Similarly to the approach of (Machlica et al, 2011) and (Kumar et al, 2009), in our proposal the main loop of the algorithm is implemented sequentially and different CUDA kernels are in charge of running different steps of the algorithm.…”
Section: Rationale of the Parallelization Approach
confidence: 99%
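As a minimal sketch of the pattern this statement describes, assuming 1-D features, unit variances, and illustrative kernel names (none of which come from the cited papers), the host loop stays sequential while each EM step runs as its own CUDA kernel:

```cuda
#include <cuda_runtime.h>

// E-step kernel: one thread per sample computes normalized responsibilities.
__global__ void eStep(const float *x, const float *mu, const float *w,
                      float *gamma, int N, int K) {
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= N) return;
    float norm = 0.0f;
    for (int k = 0; k < K; ++k) {
        float d = x[n] - mu[k];
        float p = w[k] * expf(-0.5f * d * d);   // unit variance for brevity
        gamma[n * K + k] = p;
        norm += p;
    }
    for (int k = 0; k < K; ++k)
        gamma[n * K + k] /= norm;
}

// M-step kernel: one thread per component re-estimates its mean and weight.
__global__ void mStep(const float *x, const float *gamma,
                      float *mu, float *w, int N, int K) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= K) return;
    float sumG = 0.0f, sumGx = 0.0f;
    for (int n = 0; n < N; ++n) {               // naive loop; real kernels
        float g = gamma[n * K + k];             // use parallel reductions here
        sumG  += g;
        sumGx += g * x[n];
    }
    mu[k] = sumGx / sumG;
    w[k]  = sumG / (float)N;
}

// Sequential host-side main loop: one kernel launch per EM step.
void emMainLoop(const float *dX, float *dGamma, float *dMu, float *dW,
                int N, int K, int maxIter) {
    int threads = 256;
    int blocks  = (N + threads - 1) / threads;
    for (int iter = 0; iter < maxIter; ++iter) {
        eStep<<<blocks, threads>>>(dX, dMu, dW, dGamma, N, K);
        mStep<<<1, K>>>(dX, dGamma, dMu, dW, N, K);
        cudaDeviceSynchronize();                // a convergence test goes here
    }
}
```

Real implementations replace the naive per-component loop in the M-step with parallel reductions and handle full covariance updates; the sketch only shows the division of labor between the sequential host loop and the per-step kernels.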
“…The work includes a parallel version of the c-means algorithm. In Kumar et al (2009), the authors spread the EM algorithm over six CUDA kernels for fast parallel parametric estimation of GMMs. That work focuses on speeding up the EM algorithm through improvements to the kernels and data organization, not on applying it to specific problems.…”
Section: Previous Work on Parallel Implementation of EM and GMM Learning
confidence: 99%
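The kernel and data-organization improvements mentioned in this statement can be illustrated by a common CUDA idiom: storing the sample matrix feature-major so that consecutive threads read consecutive floats (coalesced access). The kernel below, an assumed diagonal-covariance log-density and not code from Kumar et al, sketches that layout choice:

```cuda
// Diagonal-covariance Gaussian log-density for one mixture component.
// x is stored feature-major (D rows of N samples), so x[d * N + n] is
// read coalesced across the N threads of a warp. With a sample-major
// layout (x[n * D + d]) the same reads would be strided by D.
__global__ void logDensityFeatureMajor(const float *x,      // D x N samples
                                       const float *mu,     // D means
                                       const float *invVar, // D inverse variances
                                       float *out,          // N log-densities
                                       int N, int D) {
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= N) return;
    float acc = 0.0f;
    for (int d = 0; d < D; ++d) {
        float diff = x[d * N + n] - mu[d];   // coalesced load
        acc += diff * diff * invVar[d];
    }
    out[n] = -0.5f * acc;   // log-density up to an additive constant
}
```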
“…All c_{j,i} generate a multivariate Gaussian probability distribution using a fast parallel expectation maximization accelerated by GPU [26]:…”
Section: Second Order Feature
confidence: 99%
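The quotation breaks off at the colon where the cited formula would appear; the multivariate Gaussian density it refers to, in its standard textbook form (not copied from [26]), is

```latex
\mathcal{N}(\mathbf{x};\,\boldsymbol{\mu},\boldsymbol{\Sigma})
  = \frac{1}{(2\pi)^{d/2}\,\lvert\boldsymbol{\Sigma}\rvert^{1/2}}
    \exp\!\left(-\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}}
    \boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right),
```

where d is the feature dimension, mu the mean vector, and Sigma the covariance matrix estimated by EM.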