2021
DOI: 10.1111/insr.12436
Initialization of Hidden Markov and Semi‐Markov Models: A Critical Evaluation of Several Strategies

Abstract: The expectation-maximization (EM) algorithm is a familiar tool for computing the maximum likelihood estimate of the parameters in hidden Markov and semi-Markov models. This paper carries out a detailed study on the influence that the initial values of the parameters impose on the results produced by the algorithm. We compare random starts and partitional and model-based strategies for choosing the initial values for the EM algorithm in the case of multivariate Gaussian emission distributions (EDs) and assess t…

Cited by 21 publications (10 citation statements)
References 72 publications (72 reference statements)
“…In simulations we find that this approach leads to significantly more accurate state estimates for both sparse 𝐾-means and the sparse jump model. It is a significant advantage of this approach to fitting jump models that it is far more robust to initialization than traditional maximum likelihood estimation of HMMs (Maruotti & Punzo, 2021). Similar to other studies, we find that 𝐾-means++ performs well in combination with repetitions (Celebi et al, 2013;Fränti & Sieranoja, 2019).…”
Section: Random Initialization (supporting)
confidence: 87%
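The 𝐾-means++-with-repetitions strategy mentioned in this excerpt can be sketched in a few lines of NumPy. This is a minimal illustration of the general idea (seed centers with distance-proportional sampling, repeat several times, keep the cheapest start), not the implementation used by the cited authors; all function names are invented:

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """K-means++ seeding: each new center is a data point drawn with
    probability proportional to its squared distance to the nearest
    center chosen so far."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

def best_of_repeats(X, k, n_repeats=10, seed=0):
    """Repeat the seeding and keep the start with the lowest
    quantization error (sum of squared distances to nearest center)."""
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_repeats):
        C = kmeans_pp_init(X, k, rng)
        cost = np.min(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1).sum()
        if cost < best_cost:
            best, best_cost = C, cost
    return best
```

In an HMM context, the resulting centers would then serve as initial emission means for the EM algorithm.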
“…Bemporad et al (2018) proposed to fit jump models with 𝐾 states by minimizing the objective function (1). In this article, we consider the squared Euclidean distance 𝓁(𝒚, 𝝁) = ‖𝒚 − 𝝁‖² as the loss function, which results in the objective function (1) for 𝜆 = 0 being the same as that for 𝐾-means clustering (Lloyd, 1982). It is not surprising that this loss function is useful for fitting jump models in light of 𝐾-means clustering being a successful initialization strategy for maximum likelihood estimation of hidden Markov and semi-Markov models (Maruotti & Punzo, 2021). We refer to the resulting model as the jump model or standard jump model.…”
Section: Jump Models (mentioning)
confidence: 99%
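The objective described in this excerpt (a per-observation loss plus a penalty of 𝜆 for every state switch, reducing to the 𝐾-means objective at 𝜆 = 0) can be written down directly. The sketch below assumes squared Euclidean loss as in the excerpt; the function name is hypothetical:

```python
import numpy as np

def jump_objective(Y, mu, s, lam):
    """Jump-model objective: squared-error loss of each observation
    against its assigned state's mean, plus lam for every switch in
    the state sequence. With lam = 0 this is the K-means objective."""
    loss = ((Y - mu[s]) ** 2).sum()
    jumps = np.count_nonzero(s[1:] != s[:-1])
    return loss + lam * jumps
```

Minimizing this over the state sequence `s` (e.g. by dynamic programming) and the means `mu` alternately is the coordinate-descent scheme usually used to fit jump models.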
“…For real-data applications, well-established (random and deterministic) initialization strategies that are available for hidden Markov models should be used. A recent review on this very important aspect can be found in Maruotti and Punzo (2021). All algorithms are iterated until the change in the log-likelihood of two subsequent iterations is smaller than 10⁻⁸.…”
Section: Simulation Study (mentioning)
confidence: 99%
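The stopping rule quoted above (iterate EM until the log-likelihood gain between successive iterations falls below 10⁻⁸) can be illustrated on a toy problem. The two-component 1-D Gaussian mixture below is only a stand-in for the hidden Markov models of the cited study, chosen because it shows the same tolerance-based termination in a few lines:

```python
import numpy as np

def em_gmm_1d(x, tol=1e-8, max_iter=500):
    """EM for a two-component 1-D Gaussian mixture, stopped when the
    log-likelihood improves by less than `tol` between iterations.
    Deterministic min/max initialization of the means for simplicity."""
    mu = np.array([x.min(), x.max()], dtype=float)
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: component densities and responsibilities
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
               / (sig * np.sqrt(2 * np.pi))
        ll = np.log(dens.sum(axis=1)).sum()
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, std deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
        # the convergence criterion quoted in the excerpt
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return mu, sig, w, ll
```

For an HMM the E-step would run the forward-backward algorithm instead of computing independent responsibilities, but the termination test on the log-likelihood is identical.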
“…Combining the increased flexibility to capture a wide range of distributional shapes of the SDs with the well-known advantages of HMMs, HSMMs constitute a versatile framework in several spheres of application (see Guédon 2003; Barbu and Limnios 2009; Bulla et al. 2010; O’Connell and Højsgaard 2011; Yu 2015; Maruotti and Punzo 2021 and the references therein).…”
Section: Introduction (mentioning)
confidence: 99%