1995 International Conference on Acoustics, Speech, and Signal Processing
DOI: 10.1109/icassp.1995.479666

Language model representations for beam-search decoding

Cited by 33 publications (29 citation statements)
References 7 publications
“…All selection criteria have been empirically evaluated on a 10,000-word Italian newspaper dictation task [1]. Trigram selection methods have been applied for generating LMs of different sizes.…”
Section: Methods
confidence: 99%
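To make "generating LMs of different sizes" concrete: score-based trigram selection typically amounts to sweeping a threshold over some per-trigram score. The sketch below is hypothetical (the `score` callable and the toy counts are invented here); any of the selection criteria compared in the cited work would slot in as `score`.

```python
from collections import Counter

def select_trigrams(trigram_counts, score, thresholds):
    """Build trigram sets of different sizes by sweeping a threshold
    over a per-trigram score.  `score` is a stand-in for whichever
    selection criterion is being evaluated; both it and the counts
    below are invented for illustration."""
    scored = {tg: score(tg, n) for tg, n in trigram_counts.items()}
    return {t: {tg for tg, s in scored.items() if s >= t} for t in thresholds}

# Toy usage with raw frequency as the selection score.
counts = Counter({("the", "stock", "market"): 42,
                  ("stock", "market", "fell"): 17,
                  ("market", "fell", "sharply"): 3})
for t, kept in sorted(select_trigrams(counts, lambda tg, n: n, [5, 20]).items()):
    print(f"threshold {t}: {len(kept)} trigrams kept")
```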
“…The utility of introducing trigrams in a LM can also be measured with the amount of information about the current word (w_t) gained by augmenting the context from (w_{t-1}) to (w_{t-2}, w_{t-1}). Equations (9) and (10) explain the individual contribution to the global information of contexts of type y and xy, respectively. The following score functions can thus be considered:…”
Section: Mutual Information
confidence: 99%
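Read in isolation, the quoted score is the information gained about the next word by extending a bigram context y to a trigram context xy. A minimal sketch of that quantity from relative-frequency estimates follows; it is one plausible reading of the quote, not a reconstruction of the cited paper's literal equations (9) and (10), and all names in it are illustrative.

```python
import math
from collections import Counter

def info_gain_per_context(trigram_counts):
    """For each trigram context (x, y), estimate the information
    gained about the next word w by extending the bigram context y
    to (x, y):  sum_w P(x,y,w) * log2( P(w|x,y) / P(w|y) ).
    Probabilities are relative-frequency estimates; this is one
    plausible reading of the quoted score functions, not the cited
    paper's exact formulation."""
    total = sum(trigram_counts.values())
    ctx_xy, ctx_yw, ctx_y = Counter(), Counter(), Counter()
    for (x, y, w), n in trigram_counts.items():
        ctx_xy[(x, y)] += n
        ctx_yw[(y, w)] += n
        ctx_y[y] += n
    gain = Counter()
    for (x, y, w), n in trigram_counts.items():
        p_xyw = n / total                       # joint P(x, y, w)
        p_w_xy = n / ctx_xy[(x, y)]             # P(w | x, y)
        p_w_y = ctx_yw[(y, w)] / ctx_y[y]       # P(w | y)
        gain[(x, y)] += p_xyw * math.log2(p_w_xy / p_w_y)
    return gain

# Toy usage: contexts whose trigram extension is most informative
# would be ranked highest for inclusion in the LM.
counts = {("the", "stock", "market"): 8, ("a", "stock", "market"): 2,
          ("the", "stock", "exchange"): 4}
for ctx, g in info_gain_per_context(counts).items():
    print(ctx, round(g, 4))
```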
“…However, for continuous speech recognition systems using higher-order language models, the linguistic state cannot be determined locally and the word boundaries are uncertain. Several solutions based on creating copies of the PPT for each unique linguistic context solve this problem [8,9,10]; however, these approaches create redundant sub-tree computations, whose number corresponds to the number of active linguistic contexts. A computation is redundant when a sub-tree instance is dominated by another instance of that sub-tree.…”
Section: Re-entrant vs. Non-re-entrant Trees
confidence: 99%
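The notion of sub-tree dominance in the quote can be illustrated directly: assuming the LM score is applied only at word ends, instances of the same PPT sub-tree entered by different LM contexts compute identical within-tree scores up to a constant entry offset, so any instance entered with a worse score is dominated and its computation is redundant. A hedged sketch with invented names and toy scores:

```python
def dominated_instances(entry_scores):
    """entry_scores maps an LM context to the (log-prob) path score
    with which the search enters one shared PPT sub-tree at the
    current frame.  Since all instances accumulate identical
    within-tree scores up to this constant offset, every instance
    entered below the best score is dominated: its sub-tree
    computation repeats work the dominant instance already does.
    Illustrative sketch only, not a decoder's actual pruning code."""
    best = max(entry_scores.values())
    return [ctx for ctx, s in entry_scores.items() if s < best]

# Toy usage: three active LM contexts entering the same sub-tree.
scores = {("we", "saw"): -12.3, ("they", "saw"): -14.1, ("I", "saw"): -15.7}
print(dominated_instances(scores))   # the two weaker contexts are redundant
```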
“…To overcome this difficulty, Antoniol et al. [9] used successor trees, where higher-order language model probabilities are distributed over the appropriate tree. However, successor trees do not eliminate the redundant computations revealed by sub-tree dominance.…”
Section: Using LM Probabilities in a PPT
confidence: 99%
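One standard way to "distribute" LM probabilities over a tree, as the quote describes for successor trees, is factoring: each node stores the best LM probability of any word below it, and each arc carries the ratio to its parent, so the arc factors multiply back to P(word | context) at the leaf. The quote does not give [9]'s exact construction, so the sketch below is an assumption-laden illustration (all names invented):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Node:
    word: Optional[str] = None                  # set at leaves only
    children: Dict[str, "Node"] = field(default_factory=dict)
    best: float = 0.0                           # best LM prob of any word below
    arc_factor: float = 1.0                     # incremental LM score on the
                                                # arc entering this node

def factor_lm(node: Node, lm_prob: Dict[str, float]) -> float:
    """Distribute P(word | context) over the tree: a node's `best` is
    the highest LM probability among the words below it, and each arc
    carries child.best / parent.best, so multiplying arc factors from
    the root to a leaf (times the root's `best`, applied at tree
    entry) recovers the word's full LM probability."""
    if node.word is not None:                   # leaf = complete word
        node.best = lm_prob[node.word]
    else:
        node.best = max(factor_lm(c, lm_prob) for c in node.children.values())
        for c in node.children.values():
            c.arc_factor = c.best / node.best
    return node.best

# Toy tree for "cat" (/k a t/) and "cab" (/k a b/).
n_a = Node(children={"t": Node(word="cat"), "b": Node(word="cab")})
root = Node(children={"k": Node(children={"a": n_a})})
entry = factor_lm(root, {"cat": 0.03, "cab": 0.01})   # 0.03 applied at entry
# Path k->a->b multiplies arc factors 1.0 * 1.0 * (0.01/0.03) -> 0.01 = P(cab).
```

The appeal of this arrangement is that most of each word's LM score is available early, near the tree root, which tightens beam pruning; as the quote notes, though, it does not by itself remove the redundant per-context sub-tree computations.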