2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03).
DOI: 10.1109/icassp.2003.1198840
Optimal clustering of multivariate normal distributions using divergence and its application to HMM adaptation

Cited by 22 publications (22 citation statements)
References 11 publications
“…The centroid is the CCG that minimizes the sum of the WSKLD to all CGs. Here, we extend the results of [43] by modifying the cost function to (21). The mean of a CCG is thereby estimated as…”
Section: Parameter Estimation of CCGs
confidence: 99%
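
The snippet above refers to a weighted symmetric Kullback-Leibler divergence (WSKLD) between component Gaussians and a centroid Gaussian; the modified cost function (21) of the citing paper is not shown here. As a minimal sketch, assuming diagonal covariances and the plain (unmodified) symmetric KL cost, the divergence and the closed-form centroid-mean update look roughly like this (all names and shapes are illustrative):

```python
import numpy as np

def symmetric_kl_diag(mu_p, var_p, mu_q, var_q):
    """Symmetric KL divergence KL(p||q) + KL(q||p) between two multivariate
    normals with diagonal covariances, given as variance vectors."""
    diff2 = (mu_p - mu_q) ** 2
    # The log-determinant terms cancel in the symmetric sum.
    return 0.5 * np.sum(var_p / var_q + var_q / var_p - 2.0
                        + diff2 * (1.0 / var_p + 1.0 / var_q))

def centroid_mean(mus, variances, weights, var_c):
    """Stationary point of the weighted sum of symmetric KLs with respect to
    the centroid mean, with the centroid variance var_c held fixed.
    mus, variances: (N, d) arrays; weights: (N,) array; var_c: (d,) array."""
    prec = weights[:, None] * (1.0 / variances + 1.0 / var_c)
    return np.sum(prec * mus, axis=0) / np.sum(prec, axis=0)
```

The closed-form mean is a precision-weighted average of the member means, which is why the centroid mean and variance have to be refined together rather than in a single pass.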
“…Like in [43], Σ̂_n is constrained to be diagonal during clustering. It can be seen from Equations (22) and (23) that the procedure of estimating the CCGs given the weights ĝ(n,m) is iterative.…”
Section: Parameter Estimation of CCGs
confidence: 99%
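
A hedged sketch of that iterative procedure, under the same assumptions as above (diagonal covariances, plain weighted symmetric KL cost, and the per-component weights treated as fixed inside the loop; the weights ĝ(n,m) and Equations (22)-(23) of the citing paper are not reproduced here):

```python
def centroid_diag_gaussian(mus, variances, weights, n_iter=20):
    """Alternate closed-form updates of the centroid mean and diagonal
    variance (illustrative; not the cited papers' exact update equations)."""
    w = weights[:, None]
    mu_c = np.average(mus, axis=0, weights=weights)          # initialization
    var_c = np.average(variances, axis=0, weights=weights)
    for _ in range(n_iter):
        # Mean update with the current centroid variance held fixed.
        prec = w * (1.0 / variances + 1.0 / var_c)
        mu_c = np.sum(prec * mus, axis=0) / np.sum(prec, axis=0)
        # Per-dimension variance update: stationary point of the
        # symmetric KL cost in the centroid variance.
        num = np.sum(w * (variances + (mus - mu_c) ** 2), axis=0)
        den = np.sum(w / variances, axis=0)
        var_c = np.sqrt(num / den)
    return mu_c, var_c
```

Each pass couples the two updates, which matches the snippet's observation that estimating the CCGs given the weights is iterative.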
“…Clustering "raw" normal data sets is also an important algorithmic issue in computer vision and sound processing. For example, Myrvoll and Soong [3] consider this task for adapting hidden Markov model (HMM) parameters in a structured maximum a posteriori linear regression (SMAPLR) framework, and obtained improved speech recognition rates. In computer vision, Gaussian mixture models (GMMs), learnt by the expectation-maximization (EM) soft clustering technique [4], abound in statistical image modeling and therefore represent a versatile source of raw Gaussian data sets to manipulate efficiently.…”
Section: Introduction
confidence: 99%
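
To illustrate the point about GMMs as a source of "raw" Gaussian sets, a small hypothetical example: fit a diagonal-covariance GMM by EM (here with scikit-learn, on synthetic data) and feed its components to the centroid sketch above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 2-D data from two well-separated blobs (purely illustrative).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
               rng.normal(4.0, 0.5, size=(500, 2))])

# EM fit; the learnt components are the "raw" Gaussians to be clustered.
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(X)

# Collapse all components into a single representative Gaussian with the
# iterative symmetric-KL centroid sketched earlier.
mu_c, var_c = centroid_diag_gaussian(gmm.means_, gmm.covariances_,
                                     gmm.weights_)
```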
“…This is all the more important for applications that require handling symmetric information-theoretic measures [3].…”
confidence: 99%