DOI: 10.1007/978-3-540-73437-6_4

Speeding Up HMM Decoding and Training by Exploiting Sequence Repetitions

Abstract: We present a method to speed up the dynamic programming algorithms used for solving the HMM decoding and training problems for discrete time-independent HMMs. We discuss the application of our method to Viterbi's decoding and training algorithms [33], as well as to the forward-backward and Baum-Welch [6] algorithms. Our approach is based on identifying repeated substrings in the observed input sequence. Initially, we show how to exploit repetitions of all sufficiently small substrings (this is similar to …
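To make the idea concrete: in log space, one step of the Viterbi recursion is a max-plus matrix-vector product, so the per-symbol matrices of a repeated substring can be combined once and the combined matrix reused at every later occurrence. The Python sketch below illustrates only this core mechanism under simplifying assumptions; it propagates scores without traceback, and names such as log_A, log_B, and substring_matrix are illustrative, not taken from the paper.

import numpy as np

def symbol_matrix(log_A, log_B, c):
    # M_c[i, j] = log A[i, j] + log B[j, c]: score of moving from state
    # i to state j while emitting symbol c.
    return log_A + log_B[:, c][np.newaxis, :]

def maxplus(M1, M2):
    # Max-plus matrix product: result[i, j] = max_k M1[i, k] + M2[k, j].
    return (M1[:, :, np.newaxis] + M2[np.newaxis, :, :]).max(axis=1)

def substring_matrix(log_A, log_B, word):
    # Combine the per-symbol matrices of `word` once; each later
    # occurrence of `word` then costs a single matrix step.
    M = symbol_matrix(log_A, log_B, word[0])
    for c in word[1:]:
        M = maxplus(M, symbol_matrix(log_A, log_B, c))
    return M

# Toy usage: a 3-state, 4-symbol model; advance the Viterbi score vector
# over two occurrences of the substring (0, 1), paying the combination
# cost only once.
rng = np.random.default_rng(0)
log_A = np.log(rng.dirichlet(np.ones(3), size=3))  # transition scores
log_B = np.log(rng.dirichlet(np.ones(4), size=3))  # emission scores
M_w = substring_matrix(log_A, log_B, (0, 1))
v = np.log(np.full(3, 1 / 3))                      # initial scores, log space
for _ in range(2):
    v = maxplus(v[np.newaxis, :], M_w)[0]

Combining the matrices of a length-k substring is a one-time cost; afterwards each occurrence advances the score vector in a single step instead of k. Recovering the optimal state path requires extra bookkeeping not shown here.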

Cited by 17 publications (16 citation statements) | References 24 publications
“…Nevertheless, MCMC remains substantially slower than training one model and running Viterbi once, and the loss in reliability introduced by relying on one ML or MAP model is ignored in practice. For discrete emissions, compressing sequences and computing forward and backward variables and Viterbi paths on the compressed sequences yields impressive speed-ups [19]. However, discretization of continuous emissions, similar to the vector quantization used in speech recognition [18], is not viable as the separation between the different classes of observations is low, since the observations are one-dimensional.…”
Section: Introduction
confidence: 99%
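As a rough illustration of the compressed-sequence idea credited to [19] in the statement above, the sketch below advances the forward variables over a run of k identical symbols with one cached matrix power instead of k single-symbol steps. This is a simplification: run-length compression is only the simplest kind of repetition, and all names here are hypothetical.

import numpy as np

def step_matrix(A, B, c):
    # F_c[i, j] = A[i, j] * B[j, c]: probability of moving from state i
    # to state j while emitting symbol c.
    return A * B[:, c][np.newaxis, :]

def forward_compressed(A, B, pi, runs):
    # `runs` is a run-length encoding [(symbol, count), ...] of the
    # observations; `pi` is taken as the state distribution one
    # transition before the first emission (a simplifying convention).
    # A real implementation would also rescale f to avoid underflow.
    f = pi.copy()
    cache = {}
    for c, k in runs:
        if (c, k) not in cache:
            cache[(c, k)] = np.linalg.matrix_power(step_matrix(A, B, c), k)
        f = f @ cache[(c, k)]
    return f.sum()  # likelihood of the whole observation sequence

The same reuse works for the backward variables and, in the max-plus semiring, for Viterbi scores; handling arbitrary repeated substrings rather than runs follows the same pattern of caching the product of each block's step matrices.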
“…Researchers have mapped HMM-based applications to GPUs and achieved order-of-magnitude speedups. They have applied task-parallel [19]-[23], data-parallel [24]-[27], and combined task- and data-parallel [28]-[32] approaches to HMMs. Similar approaches can be adopted to improve the performance of stochastic automata.…”
Section: Accelerating Forward Algorithm
confidence: 99%
“…Mozes et al. presented a method [18] to speed up the dynamic programming algorithms used for solving the HMM decoding and training problems for discrete time-independent HMMs, and discussed the application of this method to Viterbi's decoding and training algorithms [23], as well as to the forward-backward and Baum-Welch [5] algorithms. The presented approach was based on identifying repeated substrings in the observed input sequence.…”
Section: Introduction
confidence: 99%