Interspeech 2018
DOI: 10.21437/interspeech.2018-1070
i-Vectors in Language Modeling: An Efficient Way of Domain Adaptation for Feed-Forward Models

Abstract: We show an effective way of adding context information to shallow neural language models. We propose to use the Subspace Multinomial Model (SMM) for context modeling, and we add the extracted i-vectors in a computationally efficient way. By adding this information, we shrink the gap between a shallow feed-forward network and an LSTM from 65 to 31 points of perplexity on the WikiText-2 corpus (in the case of a neural 5-gram model). Furthermore, we show that SMM i-vectors are suitable for domain adaptation, and a very sma…
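The abstract describes augmenting the input of a feed-forward n-gram language model with a fixed-dimensional context i-vector. A minimal sketch of that idea follows, assuming PyTorch; the layer sizes, `ivector_dim`, and the concatenation point are illustrative assumptions, not the paper's exact configuration:

```python
# Sketch: a feed-forward 5-gram neural LM whose input is augmented with a
# fixed-dimensional context i-vector (e.g. one extracted by an SMM).
# Hyperparameters here are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class IVectorFFLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, ivector_dim=50,
                 hidden_dim=600, context_size=4):  # 4 history words -> 5-gram
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The hidden layer sees the concatenated history embeddings
        # plus the i-vector, so the context costs only ivector_dim
        # extra input units.
        self.hidden = nn.Linear(context_size * emb_dim + ivector_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, history, ivector):
        # history: (batch, context_size) word ids
        # ivector: (batch, ivector_dim) context vector, typically constant
        #          over a document or block of text
        h = self.embed(history).flatten(start_dim=1)  # (batch, context*emb)
        h = torch.cat([h, ivector], dim=1)            # inject context info
        h = torch.tanh(self.hidden(h))
        return self.out(h)                            # logits over vocabulary

# Usage with dummy data:
model = IVectorFFLM(vocab_size=10000)
hist = torch.randint(0, 10000, (8, 4))
ivec = torch.randn(8, 50)
logits = model(hist, ivec)                            # shape (8, 10000)
```

Because the i-vector enters through a single linear projection, adapting to a new domain can plausibly be done by re-estimating i-vectors alone while the network weights stay fixed, which is consistent with the efficiency claim in the abstract.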

Cited by 4 publications (1 citation statement). References 7 publications.
“…Learning word and document embeddings has proven to be useful in a wide range of information retrieval, speech, and natural language processing applications [1]–[5]. These embeddings elicit the latent semantic relations present among the co-occurring words in a sentence or the bag-of-words from a document.…”
Section: Introduction
Confidence: 99%