1997
DOI: 10.1006/csla.1996.0021

HMM topology design using maximum likelihood successive state splitting

Cited by 103 publications (68 citation statements); references 21 publications.
“…In separate work, Sagayama et al. [31] have proposed asynchronous transition HMMs (AT-HMMs), which model the temporal characteristics of each acoustic feature component separately. Their system uses a form of the successive state splitting algorithm [34, 26] to learn the temporal and contextual characteristics of each feature. Using Mel-scale cepstra as observations, they report a significant reduction of errors compared to a standard HMM approach.…”
Section: Discussion
confidence: 99%
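
The split criterion these statements refer to can be made concrete. Below is a minimal sketch, not the authors' code: it assumes single-Gaussian states with diagonal covariances and a hard frame-to-state assignment, and scores a candidate split (contextual or temporal) by its log-likelihood gain; all names are hypothetical.

```python
# Sketch of the likelihood-gain test at the heart of successive state
# splitting (assumption: diagonal-Gaussian states, hard frame assignment).
import numpy as np

def state_loglik(frames):
    """Total log-likelihood of frames under their own diagonal-Gaussian ML fit."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6            # variance floor for stability
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)

def split_gain(frames, assign):
    """Log-likelihood gain from splitting one state's frames into two groups.

    `assign` is a hypothetical boolean partition, e.g. by context label for a
    contextual split or by position in time for a temporal split."""
    if assign.all() or not assign.any():
        return 0.0                             # degenerate split: no gain
    before = state_loglik(frames)
    after = state_loglik(frames[assign]) + state_loglik(frames[~assign])
    return after - before

# A splitting loop would repeatedly take the state/partition with the largest
# gain, re-estimate the model, and stop once the gain drops below a threshold.
```
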
“…Ikeda presented a similar scheme with an objective function built around Akaike's Information Criterion to limit over-fitting [Ikeda, 1993]. The speech recognition literature now contains numerous variants of this strategy, including maximum likelihood criteria for splitting [Ostendorf and Singer, 1997]; search by genetic algorithms [Yada et al., 1996, Takara et al., 1997]; and splitting to describe exceptions [Fujiwara et al., 1995, Valtchev et al., 1997]. Nearly all of these algorithms use beam search (generate-and-test with multiple heads) to compensate for dead-ends and declines in posterior probability; most of the computation is squandered.…”
Section: HMM Induction
confidence: 99%
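
The over-fitting control and search strategy this statement describes can be sketched as follows. The AIC gate follows the standard AIC = 2k - 2 log L definition, and `expand`/`score` are hypothetical callbacks, not an API from any cited paper.

```python
# Hedged sketch: an AIC gate against over-fitting (Ikeda-style) plus
# generate-and-test beam search over split hypotheses, as the passage
# describes. Nothing here is taken verbatim from the cited papers.

def aic_accepts_split(loglik_gain, added_params):
    """AIC = 2k - 2*logL, so a split lowers AIC exactly when its
    log-likelihood gain exceeds the number of parameters it adds."""
    return loglik_gain > added_params

def beam_split_search(initial_model, expand, score, beam=4, rounds=10):
    """Keep the `beam` best models after each round of candidate splits.

    expand(model) -> iterable of candidate split models (hypothetical);
    score(model) -> float, e.g. log-likelihood minus an AIC penalty."""
    frontier = [initial_model]
    best = initial_model
    for _ in range(rounds):
        pool = [child for model in frontier for child in expand(model)]
        if not pool:
            break                      # no candidate splits left: dead end
        frontier = sorted(pool, key=score, reverse=True)[:beam]
        if score(frontier[0]) > score(best):
            best = frontier[0]
    return best
```

The multiple frontier heads are exactly the compensation for dead-ends the statement mentions: a single greedy head would commit to the first locally best split and could not recover.
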
“…Furthermore, it is impossible for a finite set of training data to cover every possible combination of contextual factors. Various parameter-tying techniques have been developed [7]–[11] to avoid this problem. Among them, a decision tree-based context-clustering technique [12] has been widely used.…”
Section: Introduction
confidence: 99%
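
Decision tree-based context clustering, mentioned here as the most widely used tying technique, greedily picks at each leaf the phonetic question whose yes/no partition of the leaf's data maximizes the likelihood gain. Below is a minimal sketch under the same diagonal-Gaussian assumption as the earlier block; the question representation is an assumption, not the cited paper's code.

```python
# Sketch of one step of decision-tree context clustering: choose the
# phonetic question with the largest log-likelihood gain at a leaf.
# (Assumed representation: `questions` maps a name to a predicate over
# a context label; none of these names come from the cited papers.)
import numpy as np

def leaf_loglik(frames):
    """Log-likelihood of frames under a shared diagonal-Gaussian fit."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)

def best_question(frames, contexts, questions):
    """Return (question_name, gain) for the best yes/no split of this leaf."""
    base = leaf_loglik(frames)
    best_name, best_gain = None, 0.0
    for name, predicate in questions.items():
        yes = np.array([predicate(c) for c in contexts])
        if yes.all() or not yes.any():
            continue                   # degenerate split, skip
        gain = leaf_loglik(frames[yes]) + leaf_loglik(frames[~yes]) - base
        if gain > best_gain:
            best_name, best_gain = name, gain
    return best_name, best_gain

# Tree growing recurses on the leaf with the best (question, gain) pair
# while the gain stays above a stopping threshold; tied states share one
# output distribution per leaf.
```
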