2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2018.8461704
Neural Network Language Modeling with Letter-Based Features and Importance Sampling

Cited by 54 publications (48 citation statements, 2018–2024)
References 8 publications
“…Their application varies from identifying patterns in text [42] to data extraction [43], automatic speech recognition, machine translation, and spell checking [44,45]. Neural network language models offer an improved alternative [46]; both have the potential to be integrated into computer-assisted tools for supporting text reviewers.…”
Section: Natural Language Processing Approaches (mentioning, confidence: 99%)
“…In order to improve the 1-best recognition hypotheses, we explored three types of NNLMs. Along with the recurrent neural network language model (RNN-LM) integrated with the Kaldi toolkit [39], we also investigated Transformer-XL [40] and FRequency AGnostic word Embedding (FRAGE) with ASGD Weight-Dropped (AWD) Long Short-Term Memory (LSTM) Mixture of Softmaxes (MoS) [41], which are the current state-of-the-art for large and medium-size vocabulary language modeling tasks, respectively. The initial RNN-LM, Transformer-XL, and FRAGE with AWD-LSTM-MoS were trained on word-level training-set transcriptions.…”
Section: Final Acoustic Models (mentioning, confidence: 99%)
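The rescoring pipeline this excerpt describes can be pictured with a short sketch: each first-pass hypothesis from the recognizer is re-scored by a neural LM, the two scores are interpolated, and the list is re-ranked. This is a minimal illustration only; the toy vocabulary, model sizes, and `interp_weight` are assumptions, not the cited Kaldi recipe or the actual Transformer-XL/AWD-LSTM-MoS configurations.

```python
# Minimal sketch of N-best rescoring with an RNN-LM (PyTorch).
# Illustrative only: the toy vocabulary, model sizes, and
# `interp_weight` are assumptions, not values from the cited work.
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    """Word-level LSTM language model used to re-score hypotheses."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def log_prob(self, ids):
        """Total log-probability of a token sequence under the model."""
        x = ids.unsqueeze(0)                       # (1, T)
        hidden, _ = self.lstm(self.embed(x[:, :-1]))
        logits = self.proj(hidden)                 # predicts tokens 1..T-1
        logp = torch.log_softmax(logits, dim=-1)
        tgt = x[:, 1:]
        return logp.gather(-1, tgt.unsqueeze(-1)).sum().item()

def rescore(nbest, model, word2id, interp_weight=0.5):
    """Combine the first-pass score with the RNN-LM score and re-rank."""
    rescored = []
    for words, first_pass_score in nbest:
        ids = torch.tensor([word2id[w] for w in words])
        lm_score = model.log_prob(ids)
        rescored.append((words, first_pass_score + interp_weight * lm_score))
    return max(rescored, key=lambda pair: pair[1])

# Toy usage: two competing hypotheses for the same utterance.
vocab = ["<s>", "</s>", "recognize", "speech", "wreck", "a", "nice", "beach"]
word2id = {w: i for i, w in enumerate(vocab)}
model = RNNLM(len(vocab))
nbest = [
    (["<s>", "recognize", "speech", "</s>"], -12.3),
    (["<s>", "wreck", "a", "nice", "beach", "</s>"], -11.9),
]
best_words, best_score = rescore(nbest, model, word2id)
print(best_words, round(best_score, 2))
```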
“…Compared with PLSA and LDA, which describe "word-document" co-occurrence, WVM attempts to discover "word-word" co-occurrence dependence via latent topics. With the popularity of neural networks, neural language models such as the recurrent neural network language model (RNNLM) have been proposed in recent works [11]. Li et al. [12] further proposed two adaptation models (a cache model and a DNN-based model) for the RNNLM to capture topic information and long-distance triggers in ASR.…”
Section: Topic-based Language Model Adaptation (mentioning, confidence: 99%)
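To make the cache-model idea mentioned in this excerpt concrete, here is a minimal sketch: a unigram distribution over recently decoded words is interpolated with a static base LM, so topical words triggered earlier in the session gain probability. The base probabilities, cache size, and interpolation weight `lam` are made-up illustration values, not parameters from the cited paper [12].

```python
# Minimal sketch of a cache language model: a unigram cache over the
# recent history is interpolated with a static base LM, boosting
# recently seen (topical) words. All numbers below are illustrative
# assumptions, not values from the cited paper [12].
from collections import Counter, deque

class CacheLM:
    def __init__(self, base_prob, cache_size=100, lam=0.2):
        self.base_prob = base_prob          # dict: word -> static probability
        self.history = deque(maxlen=cache_size)
        self.lam = lam                      # interpolation weight for the cache

    def observe(self, word):
        """Push a decoded word into the cache (the long-distance trigger)."""
        self.history.append(word)

    def prob(self, word):
        """P(w) = (1 - lam) * P_base(w) + lam * P_cache(w)."""
        counts = Counter(self.history)
        cache_p = counts[word] / len(self.history) if self.history else 0.0
        return (1 - self.lam) * self.base_prob.get(word, 1e-6) + self.lam * cache_p

# Toy usage: after observing finance-related words, "loan" gains probability.
base = {"bank": 0.01, "river": 0.01, "loan": 0.01}
lm = CacheLM(base)
print(f"before: {lm.prob('loan'):.4f}")
for w in ["bank", "loan", "loan", "interest"]:
    lm.observe(w)
print(f"after:  {lm.prob('loan'):.4f}")
```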