2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013)
DOI: 10.1109/icassp.2013.6639308
Paraphrastic language models and combination with neural network language models

Abstract: In natural languages, multiple word sequences can represent the same underlying meaning. Only modelling the observed surface word sequence can result in poor context coverage, for example, when using n-gram language models (LMs). To handle this issue, paraphrastic LMs were proposed in previous research and successfully applied to a US English conversational telephone speech transcription task. In order to exploit the complementary characteristics of paraphrastic LMs and neural network LMs (NNLMs), the combination…
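The abstract is truncated at this point, but combining an n-gram-style LM with an NNLM is most commonly done by linear interpolation of the two models' probabilities. A minimal sketch under that assumption (the callables p_plm and p_nnlm and the weight lam are hypothetical placeholders, not the paper's implementation):

    # Linear interpolation of two LM probabilities for one predicted word.
    # p_plm and p_nnlm are assumed callables returning P(word | history) from
    # a paraphrastic n-gram LM and a neural network LM; lam is an interpolation
    # weight, typically tuned on held-out data to minimise perplexity.
    def interpolated_prob(word, history, p_plm, p_nnlm, lam=0.5):
        return lam * p_plm(word, history) + (1.0 - lam) * p_nnlm(word, history)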

Cited by 4 publications (9 citation statements). References 24 publications (31 reference statements).
“…Word error rate reductions of 1.3% absolute (8% relative) were obtained on a state-of-the-art large vocabulary speech recognition task. Consistent with the performance improvements previously obtained on back-off n-gram LMs [18,20,19], experimental results presented in this paper suggest the proposed method is also effective in improving the generalization performance of feedforward NNLMs. In contrast, previous research on NNLMs used no explicit paraphrastic modelling [2,27,25,13,22].…”
Section: Conclusion and Relation to Prior Work (supporting)
confidence: 89%
“…This advantage can be exploited by many forms of LMs that do not explicitly capture the paraphrastic variability in natural languages. These models include, but are not restricted to, back-off n-gram LMs as investigated in previous research [18,19,20].…”
Section: Paraphrastic Counts Smoothing (mentioning)
confidence: 99%
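As a rough illustration of the count-smoothing idea this excerpt refers to (an assumed form for illustration, not quoted from the cited paper): n-gram statistics gathered from the surface training text can be interpolated with fractional statistics gathered from its paraphrase variants before the LM is estimated.

    from collections import defaultdict

    # Hedged sketch: merge surface-text n-gram counts with paraphrase-domain
    # counts. Both inputs are assumed to be dicts mapping n-gram tuples to
    # (possibly fractional) counts; lam is an assumed smoothing weight.
    def smoothed_counts(c_surface, c_paraphrase, lam=0.5):
        merged = defaultdict(float)
        for grams, weight in ((c_surface, lam), (c_paraphrase, 1.0 - lam)):
            for ngram, count in grams.items():
                merged[ngram] += weight * count
        return dict(merged)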
“…The particular type of LMs considered in this paper can flexibly model paraphrase mapping at the word, phrase and sentence level. As LM probabilities are estimated in the paraphrased domain, they are referred to as paraphrastic language models (PLMs) [16,17]. For an L-word-long word sequence W = <w_1, w_2, ..., w_i, ..., w_L> in the training data, rather than maximizing the surface word sequence's log-probability ln P(W) as for conventional LMs, the marginal probability over all paraphrase variant sequences is maximized,…”
Section: Paraphrastic Language Models (mentioning)
confidence: 99%
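The excerpt ends before the criterion itself. A plausible form consistent with the description above (the notation below is an assumption, not quoted from the paper): writing \hat{W} for a paraphrase variant sequence of the observed sentence W, the training criterion maximises

    \mathcal{F} = \sum_{W} \ln \sum_{\hat{W}} P(W \mid \hat{W}) \, P(\hat{W})

where P(W \mid \hat{W}) is a paraphrase mapping probability and P(\hat{W}) is the LM probability estimated in the paraphrased domain, in place of the conventional criterion \sum_{W} \ln P(W).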
“…The statistics required for paraphrastic LM estimation are then accumulated from the paraphrase lattices via a forward-backward pass. In order to improve phrase coverage, expert semantic labelling provided by resources such as WordNet [5] can also be used to generate paraphrases [16,17].…”
Section: End For (mentioning)
confidence: 99%
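A minimal sketch of the kind of forward-backward count accumulation described here (the lattice representation, the restriction to unigram counts, and all names below are simplifying assumptions for illustration, not the cited paper's implementation):

    from collections import defaultdict

    # Accumulate expected (fractional) word counts from a paraphrase lattice.
    # nodes: node ids in topological order; arcs: (src, dst, word, prob) tuples;
    # start/end: lattice entry and exit nodes.
    def lattice_expected_counts(nodes, arcs, start, end):
        fwd = defaultdict(float)
        bwd = defaultdict(float)
        fwd[start] = 1.0
        bwd[end] = 1.0
        for n in nodes:                            # forward pass
            for src, dst, word, p in arcs:
                if src == n:
                    fwd[dst] += fwd[n] * p
        for n in reversed(nodes):                  # backward pass
            for src, dst, word, p in arcs:
                if dst == n:
                    bwd[src] += bwd[n] * p
        total = fwd[end]                           # total lattice probability
        counts = defaultdict(float)
        for src, dst, word, p in arcs:             # arc posterior = expected count
            counts[word] += fwd[src] * p * bwd[dst] / total
        return dict(counts)

Higher-order n-gram statistics would be accumulated analogously by following arc sequences through the lattice rather than single arcs.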