1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings
DOI: 10.1109/icassp.1996.543257
DP-based wordgraph pruning

Cited by 13 publications (7 citation statements)
References 6 publications
“…1 shows the architecture of the March 1996 VERBMOBIL prototype. After the recording of the spontaneous utterance, a WHG is computed by a standard Hidden Markov Model word recognizer [31], [49]. The word hypotheses in this graph are then enriched with prosodic information (cf.…”
Section: Verbmobil System
Citation type: mentioning
confidence: 99%
“…The acoustic scores of the components were simply added. The following example (see Figure 4) illustrates this method:

bis zum vierundzwanzigsten
Recognizer output: bis zum vier und zwanzigsten
After postprocessing: bis zum vierundzwanzigsten

After inserting the compound candidates, the Dynamic-Programming-based pruning algorithm described in [3] was applied with the word-based language model to extract the best sentence. The whole-word recognition system (our baseline) yielded a 62.2% word accuracy after graph pruning, whereas the component-based system could only detect 60.7% of the words in the test set correctly.…”
Section: Word Recognition
Citation type: mentioning
confidence: 99%
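
To make the compound-recombination step quoted above concrete, here is a minimal Python sketch, assuming a word graph stored as (start_node, end_node, word, acoustic_score) arcs. The compound lexicon, function names, and graph layout are illustrative assumptions, not taken from the cited paper; only the scoring rule (component scores simply added) follows the excerpt.

# Hypothetical sketch of the compound-insertion step described above.
# Assumes arcs of the form (start_node, end_node, word, acoustic_score);
# all names and data are illustrative, not from the cited paper.

from collections import defaultdict

# Toy compound lexicon: compound -> its component words (assumed).
COMPOUNDS = {
    "vierundzwanzigsten": ["vier", "und", "zwanzigsten"],
}

def insert_compound_candidates(arcs):
    """Add one arc per compound whose components appear as a connected
    path in the graph; its acoustic score is simply the sum of the
    component scores, as in the quoted description."""
    outgoing = defaultdict(list)
    for a in arcs:
        outgoing[a[0]].append(a)

    new_arcs = []
    for compound, parts in COMPOUNDS.items():
        # Try to start the component path at every arc matching parts[0].
        for first in arcs:
            if first[2] != parts[0]:
                continue
            node, score, ok = first[1], first[3], True
            for part in parts[1:]:
                nxt = next((a for a in outgoing[node] if a[2] == part), None)
                if nxt is None:
                    ok = False
                    break
                score += nxt[3]   # component scores simply added
                node = nxt[1]
            if ok:
                new_arcs.append((first[0], node, compound, score))
    return arcs + new_arcs

# Example: the graph contains "bis zum vier und zwanzigsten".
graph = [
    (0, 1, "bis", -2.0),
    (1, 2, "zum", -1.5),
    (2, 3, "vier", -3.0),
    (3, 4, "und", -1.0),
    (4, 5, "zwanzigsten", -4.5),
]
print(insert_compound_candidates(graph)[-1])
# -> (2, 5, 'vierundzwanzigsten', -8.5)

Running the example inserts the spanning arc (2, 5, 'vierundzwanzigsten', -8.5), which the subsequent DP-based pruning can then consider alongside the original component path.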
“…In these experiments we chose a word graph with an average density of 30 hypotheses per word, because larger graphs are more likely to contain the right hypothesis. We plan to integrate this search in the second stage of our speech recognition system [3].…”
Citation type: mentioning
confidence: 99%
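
For reference, "density" in such excerpts is conventionally the total number of word hypotheses (arcs) in the graph divided by the number of actually spoken words; that definition is my assumption here, as the excerpt does not state one. A trivial sketch:

# Assumed definition of word-graph density (not given in the excerpt):
# word arcs in the graph per actually spoken word.

def graph_density(num_word_arcs: int, num_spoken_words: int) -> float:
    return num_word_arcs / num_spoken_words

# A graph with 240 word hypotheses over an 8-word utterance has
# density 30, matching the setting quoted above.
print(graph_density(240, 8))  # -> 30.0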
“…One way to further reduce the size of the lattices produced by the acoustic recognizer is to compress or prune them after decoding. Some algorithms have been developed for this purpose (Amtrup et al., 1996; Kuhn et al., 1996; Mangu and Brill, 1999; Mohri, 1997; Sixtus and Ortmanns, 1999; Weng et al., 1998). Since most language processing algorithms applied to word lattices run in polynomial time with respect to the number of words in the representation, we have designed a new word graph compression algorithm to reduce the number of words in the graphical representation while maintaining the scored hypothesis information.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
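
Since this excerpt surveys pruning algorithms of the kind the cited paper proposes, a minimal sketch of one standard DP-based scheme may help: forward-backward (Viterbi-style) pruning in the spirit of Sixtus and Ortmanns (1999). The arc representation, node ordering, and beam threshold below are illustrative assumptions, not the algorithm of any one cited paper.

# Minimal sketch of forward-backward (max/Viterbi) lattice pruning.
# Arcs: (start, end, word, log_score); node ids are assumed to be in
# topological order. Illustrative only, under the assumptions above.

NEG_INF = float("-inf")

def prune_word_graph(arcs, start, final, beam):
    """Keep only arcs lying on some complete path whose total log
    score is within `beam` of the best path's score."""
    nodes = {start, final} | {n for a in arcs for n in (a[0], a[1])}
    fwd = {n: NEG_INF for n in nodes}   # best score: start -> node
    bwd = {n: NEG_INF for n in nodes}   # best score: node -> final
    fwd[start] = 0.0
    for a in sorted(arcs, key=lambda a: a[0]):    # forward DP pass
        fwd[a[1]] = max(fwd[a[1]], fwd[a[0]] + a[3])
    bwd[final] = 0.0
    for a in sorted(arcs, key=lambda a: -a[1]):   # backward DP pass
        bwd[a[0]] = max(bwd[a[0]], a[3] + bwd[a[1]])
    best = fwd[final]
    return [a for a in arcs
            if fwd[a[0]] + a[3] + bwd[a[1]] >= best - beam]

# Usage: keep arcs within 2.0 log units of the best path;
# the poorly scoring "bist" arc is removed.
graph = [
    (0, 1, "bis", -2.0), (0, 1, "bist", -6.0),
    (1, 2, "zum", -1.5), (1, 2, "zu", -2.0),
]
print(prune_word_graph(graph, start=0, final=2, beam=2.0))

Each DP pass is a single sweep over the arcs, which is why pruning of this kind is cheap relative to the polynomial-time lattice processing the excerpt is concerned with.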