2013 IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2013.6639310

Comparison of feedforward and recurrent neural network language models

Abstract: Research on language modeling for speech recognition has increasingly focused on the application of neural networks. Two competing concepts have been developed: on the one hand, feedforward neural networks representing an n-gram approach; on the other hand, recurrent neural networks that may learn context dependencies spanning more than a fixed number of predecessor words. To the best of our knowledge, no comparison has been carried out between feedforward and state-of-the-art recurrent networks when applied to s…
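
The abstract contrasts two model families. As a minimal, illustrative sketch (not the authors' implementation; the PyTorch framing, layer sizes, and the choice of an LSTM cell are assumptions), the architectural difference can be shown as follows:

```python
# Sketch of the two competing concepts from the abstract:
# a feedforward n-gram-style neural LM vs. a recurrent neural LM.
# All hyperparameters here are illustrative, not values from the paper.
import torch
import torch.nn as nn

class FeedforwardLM(nn.Module):
    """Predicts the next word from a fixed window of (n-1) predecessor words."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, context_size=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(context_size * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context):                # context: (batch, context_size)
        e = self.embed(context).flatten(1)     # concatenate the window embeddings
        return self.out(torch.tanh(self.hidden(e)))  # logits over the next word

class RecurrentLM(nn.Module):
    """Carries a hidden state over the whole history, so the effective context
    is not limited to a fixed number of predecessor words."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, sequence):               # sequence: (batch, seq_len)
        h, _ = self.rnn(self.embed(sequence))
        return self.out(h)                     # logits at every position

# Toy usage: both models emit next-word logits trainable with cross-entropy.
vocab_size = 1000
ff = FeedforwardLM(vocab_size)
rnn = RecurrentLM(vocab_size)
print(ff(torch.randint(0, vocab_size, (2, 4))).shape)    # (2, 1000)
print(rnn(torch.randint(0, vocab_size, (2, 10))).shape)  # (2, 10, 1000)
```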

Cited by 123 publications (71 citation statements); References 12 publications.

Selected citation statements (ordered by relevance):
“…As is consistent with the literature, the recurrent network significantly outperforms any of the feed-forward models (Sundermeyer et al, 2013).…”
Section: High-order LM Perplexity (supporting; confidence: 80%)
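
The section cited above compares models by perplexity. For reference (a standard definition, not reproduced from the paper), the perplexity of a language model over a held-out sequence $w_1,\dots,w_N$ is

```latex
\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N} \ln p\,(w_i \mid w_1,\dots,w_{i-1})\right)
```

so a lower perplexity means the model assigns higher probability to the held-out text, which is the sense in which the recurrent model outperforms the feedforward ones.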
“…Most of the earlier latent semantic models for learning such vectors are designed for information retrieval (Deerwester et al 1990; Hofmann 1999; Blei et al 2003). In contrast, recent work on continuous space language models, which estimate the probability of a word sequence in a continuous space (Bengio et al 2003; Mikolov et al 2010), have advanced the state of the art in language modeling, outperforming the traditional n-gram model on speech recognition (Mikolov et al 2012; Sundermeyer et al 2013) and machine translation (Mikolov 2012; Auli et al 2013).…”
Section: Related Work (mentioning; confidence: 99%)
“…(Dungarwal et al., 2014) described the benefits of re-ranking the translation hypothesis using a simple n-gram based language model. In recent years, the use of RNNLM has shown significant improvements over the traditional n-gram models (Sundermeyer et al., 2013). (Mikolov et al., 2010) and (Liu et al., 2014) have shown significant improvements in speech recognition accuracy using RNNLM.…”
Section: Related Work (mentioning; confidence: 99%)