IEEE International Conference on Acoustics, Speech, and Signal Processing 2002
DOI: 10.1109/icassp.2002.1005716

Confidence scoring based on backward language models

Abstract: In this paper we introduce the backward N-gram language model (LM) scores as a confidence measure in large vocabulary continuous speech recognition. Contrary to a forward N-gram LM, in which the probability of a word depends on the preceding words, a word in a backward N-gram LM is predicted based on the following words only. So the backward LM is a model for sentences read from the end to the beginning. We show on the benchmark 20k-word Wall Street Journal recognition task that the backward LM scores conta…
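The abstract's idea can be illustrated with a minimal sketch (not the paper's implementation): a backward bigram LM predicts a word from the word that follows it, which is equivalent to training an ordinary forward bigram model on sentence-reversed text. The toy corpus, smoothing constant, and vocabulary size below are illustrative assumptions.

```python
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]

# Train a forward bigram model on reversed sentences: reading each
# sentence from the end to the beginning makes the "history" of a word
# its right context in the original sentence.
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    words = sent.split()[::-1] + ["</s>"]  # reverse, then mark sentence start
    prev = "<s>"                           # marks the original sentence END
    for w in words:
        bigrams[(prev, w)] += 1
        unigrams[prev] += 1
        prev = w

def backward_bigram_prob(word, next_word, alpha=1.0, vocab_size=100):
    """P(word | next_word) with add-alpha smoothing: the conditioning
    context is the FOLLOWING word, so this is a backward LM probability."""
    return (bigrams[(next_word, word)] + alpha) / (unigrams[next_word] + alpha * vocab_size)

# Score each word of a recognition hypothesis from its right context.
hyp = "the cat sat on the rug".split()
scores = [backward_bigram_prob(w, n) for w, n in zip(hyp, hyp[1:] + ["<s>"])]
```

A word pair attested in the (reversed) training data, such as "the" preceding "cat", scores higher than an unattested one, which is what makes these scores usable as per-word confidence features.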

Cited by 6 publications (6 citation statements) · References 0 publications
“…These features are extracted the same way as the surprisal features, but based on language models trained on sentence-level reversed text. The backward language model features are popular in translation quality estimation studies and show interesting results (Duchateau et al, 2002; Rubino et al, 2013b).…”
Section: Datasets
Mentioning confidence: 99%
“…Sánchez and Benedí (2006) use a stochastic BTG to obtain bilingual phrases for phrase-based SMT. Duchateau et al (2002) use the score estimated by a backward language model in a post-processing step as a confidence measure to detect wrongly recognized words in speech recognition.…”
Section: Summary and Additional Readings
Mentioning confidence: 99%
“…Since the context ‘history’ in the backward language model is actually the future words to be generated, the backward language model is normally used in a post-processing step where all words have already been generated, or in a scenario where sentences are reversed. Duchateau, Demuynck and Wambacq (2002) use the backward language model score as a confidence measure to detect wrongly recognized words in speech recognition. Finch and Sumita (2009) use the backward language model in their reverse translation decoder, where source sentences are reversed.…”
Section: Related Work
Mentioning confidence: 99%
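The post-processing use described in the citation statements above can be sketched as follows. This is a hypothetical illustration, not the paper's method: once a full hypothesis exists, every word's right context is known, so a backward-LM log-probability can be computed per word and thresholded to flag likely recognition errors. The per-word scores and the threshold below are made-up stand-ins, not outputs of a real language model.

```python
def flag_suspect_words(words, backward_logprobs, threshold=-6.0):
    """Return the words whose backward-LM log-probability falls below the
    threshold, i.e. words poorly predicted by their right context."""
    return [w for w, lp in zip(words, backward_logprobs) if lp < threshold]

# Hypothesis with one likely misrecognition ("sad" instead of "sat").
hyp = ["the", "cat", "sad", "on", "the", "mat"]
logprobs = [-1.2, -2.5, -8.7, -1.9, -1.1, -2.3]  # illustrative per-word scores

print(flag_suspect_words(hyp, logprobs))  # → ['sad']
```

The threshold would in practice be tuned on held-out data, and the backward score is typically combined with forward-LM and acoustic scores rather than used alone.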