2013 IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2013.6638228

Sparse hidden Markov models for purer clusters

Cited by 5 publications (10 citation statements)
References 20 publications
“…The sparsity of the transition probabilities of the ARHMM can be encouraged by introducing the sparsity-inducing norm of [26] into the second term of equation (14), and then maximizing equation (15), where $\bar{\lambda}$ is the previous parameter estimate of the SARHMM, $A$ is the matrix of transition probabilities, and $\|\cdot\|$ is the regularization norm. Here, although the $\ell_1$ norm encourages sparsity, we cannot use it directly, because the transition probability and the observation probability are stochastic matrices, that is, their entries are non-negative and each row must sum to 1; the $\ell_1$ norm of each row is therefore also 1, so $\ell_1$ regularization is meaningless.…”
Section: A. Off-line Parameter Training of Speech and Noise SARHMMs
Citation type: mentioning
confidence: 99%
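
The constraint argument in the quote above is easy to verify numerically. The sketch below checks that the $\ell_1$ norm of any stochastic row is exactly 1, and uses an $\ell_p$ quasi-norm with 0 < p < 1 as an assumed stand-in for the sparsity measure of [26], since the scraped quote drops the exact norm: it is smaller for sparser rows, so penalizing it in the EM objective favors sparse transitions.

```python
import numpy as np

# Two valid transition-matrix rows: non-negative entries summing to 1.
sparse_row = np.array([0.94, 0.02, 0.02, 0.02])  # nearly deterministic
dense_row = np.array([0.25, 0.25, 0.25, 0.25])   # uniform

# The l1 norm of any stochastic row is exactly 1, so it cannot tell
# sparse rows from dense ones -- the point made in the quote above.
print(np.abs(sparse_row).sum(), np.abs(dense_row).sum())  # 1.0 1.0

# Assumed stand-in penalty: an lp quasi-norm with 0 < p < 1.
# sum(a_ij ** p) is smallest for sparse rows and largest for the
# uniform row, so subtracting it from the EM auxiliary function
# rewards sparsity.
p = 0.5
print((sparse_row ** p).sum())  # ~1.39  (sparse -> small penalty)
print((dense_row ** p).sum())   # 2.0    (uniform -> large penalty)
```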
“…Here, although the $\ell_1$ norm encourages sparsity, we cannot use it directly, because the transition probability and the observation probability are stochastic matrices, that is, their entries are non-negative and each row must sum to 1; the $\ell_1$ norm of each row is therefore also 1, so $\ell_1$ regularization is meaningless. The parameter of this norm is a regularization parameter, which controls how strongly sparsity is encouraged [26]. By setting the derivative of equation (15) to zero and satisfying the constraints $a_{ij} \ge 0$ and $\sum_j a_{ij} = 1$ for each state $i$, we can obtain the update equation (16) for the transition probabilities of the SARHMM, where the max operation is added to ensure that all transition probabilities remain non-negative, and the regularization term for the transition probability is defined in equation (17) via the differential operator $\partial/\partial a_{ij}$.…”
Section: A. Off-line Parameter Training of Speech and Noise SARHMMs
Citation type: mentioning
confidence: 99%
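
As the quote describes it, equation (16) clips a penalized count statistic at zero and keeps each row stochastic. The following is a minimal sketch of such an update, not the paper's exact formula: the scraped quote drops the regularization term of equation (17), so an $\ell_p$ penalty (as in the sketch above), expected transition counts $\xi$ from the E-step, and the MM-style evaluation at the previous estimate are all assumptions here.

```python
import numpy as np

def update_transitions(xi_counts, A_prev, lam=0.5, p=0.5, floor=1e-12):
    """Sketch of the kind of update equation (16) describes.

    Not the paper's exact formula: an lp penalty lam * sum(a ** p)
    with 0 < p < 1 is assumed in place of the dropped term (17).

    xi_counts : (N, N) expected transition counts from the E-step.
    A_prev    : (N, N) previous transition-matrix estimate, used to
                evaluate the penalty's derivative (an MM-style step).
    """
    # Derivative of the assumed penalty w.r.t. a_ij, evaluated at
    # the previous estimate; it is large for small entries, which
    # drives weak transitions toward zero.
    penalty_grad = lam * p * np.maximum(A_prev, floor) ** (p - 1.0)

    # max(0, .) keeps every entry non-negative, as the quoted
    # passage notes for equation (16).
    A_new = np.maximum(xi_counts - penalty_grad, 0.0)

    # Renormalize so each row remains a valid stochastic vector.
    row_sums = A_new.sum(axis=1, keepdims=True)
    return A_new / np.maximum(row_sums, floor)

# Example: each row has one strong and one weak transition; the
# weak transitions are pruned to exactly zero.
xi = np.array([[8.0, 0.4], [0.3, 6.0]])
A0 = xi / xi.sum(axis=1, keepdims=True)
print(update_transitions(xi, A0))  # [[1. 0.], [0. 1.]]
```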