Proceedings of the 25th International Conference on Machine Learning (ICML '08), 2008
DOI: 10.1145/1390156.1390196
An HDP-HMM for systems with state persistence

Abstract: The hierarchical Dirichlet process hidden Markov model (HDP-HMM) is a flexible, nonparametric model which allows state spaces of unknown size to be learned from data. We demonstrate some limitations of the original HDP-HMM formulation (Teh et al., 2006), and propose a sticky extension which allows more robust learning of smoothly varying dynamics. Using DP mixtures, this formulation also allows learning of more complex, multimodal emission distributions. We further develop a sampling algorithm that employs a t…

Cited by 191 publications (206 citation statements)
References 11 publications
“…When modeling dynamical processes with mode persistence, the flexible nature of the HDP-HMM prior allows for mode sequences with unrealistically fast dynamics to have large posterior probability. Recently, it has been shown (Fox et al. [2008a]) that one may mitigate this problem by instead considering a sticky HDP-HMM where π_j is distributed as follows: π_j ∼ DP(αβ + κδ_j) (5). Here, (αβ + κδ_j) indicates that an amount κ > 0 is added to the j-th component of αβ, thus increasing the expected probability of self-transition. When κ = 0 the original HDP-HMM is recovered.…”
Section: Background: Dirichlet Processes and the Sticky HDP-HMM
confidence: 99%
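The sticky prior quoted above can be simulated directly in a truncated (weak-limit) approximation with L states: draw the global weights β from a finite Dirichlet standing in for GEM(γ), then draw each row π_j from a Dirichlet whose j-th pseudo-count is inflated by κ. The following NumPy sketch illustrates this; the function name, the truncation level, and the parameter values are illustrative assumptions, not from the paper.

```python
import numpy as np

def sample_sticky_transitions(L=10, alpha=1.0, gamma=1.0, kappa=5.0, seed=0):
    """Illustrative weak-limit sketch: pi_j ~ Dir(alpha*beta + kappa*e_j).

    L is the truncation level (number of states in the finite
    approximation); kappa > 0 inflates the j-th (self-transition)
    pseudo-count of row j, and kappa = 0 recovers the plain HDP-HMM.
    """
    rng = np.random.default_rng(seed)
    # Finite-dimensional stand-in for the global weights beta ~ GEM(gamma).
    beta = rng.dirichlet(np.full(L, gamma / L))
    pi = np.empty((L, L))
    for j in range(L):
        # Add kappa to the j-th component of alpha*beta only.
        concentration = alpha * beta + kappa * np.eye(L)[j]
        pi[j] = rng.dirichlet(concentration)
    return pi

pi = sample_sticky_transitions()
```

With κ = 5 and α = 1, each row's self-transition mass has prior mean (αβ_j + κ)/(α + κ), so the diagonal of π dominates on average, encoding the state persistence the quote describes.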
“…See Fox et al. [2008a] for details of sampling ({π_k}, β, α, κ, γ) given z_{1:T}. The sampler for the HDP-SLDS is identical with the additional step of sampling the state sequence, x_{1:T}, and conditioning on the state sequence when resampling dynamic parameters.…”
Section: Gibbs Sampler
confidence: 99%
“…Over the past decade, nonparametric methods have been successfully applied to many existing graphical models, allowing them to grow the number of latent states as necessary to fit the data [1][2][3][4][5][6]. Infinite HCRFs were first presented in [7]; since exact inference for such models with an infinite number of parameters is intractable, inference was based on a Markov chain Monte Carlo (MCMC) sampling algorithm.…”
Section: Introduction
confidence: 99%