2015
DOI: 10.1016/j.csl.2015.01.004
Latent semantics in language models

Cited by 8 publications (5 citation statements). References 31 publications.
“…We estimated the n-gram frequencies from Wikipedia. We used the language model from (Brychcín and Konopík, 2015), Stanford CoreNLP for part-of-speech tags (Toutanova et al, 2003), the MIT Java Wordnet Interface (Finlayson, 2014), and the Brainy implementation of maximum entropy classifier (Konkol, 2014).…”
Section: Methods (citation type: mentioning; confidence: 99%)
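The n-gram frequency estimation mentioned in this citation statement can be sketched as a simple sliding-window count. This is a minimal illustration, not the authors' implementation; the toy corpus below is a hypothetical stand-in for the Wikipedia text they used:

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count n-grams in a token sequence with a sliding window."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Hypothetical toy corpus standing in for Wikipedia text.
tokens = "the cat sat on the mat".split()
bigrams = ngram_counts(tokens, 2)
print(bigrams[("the", "cat")])  # each bigram window contributes one count
```

In practice the counts would be accumulated over a full Wikipedia dump and smoothed before use in a language model.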
“…Word-based semantic spaces provide impressive performance in a variety of NLP tasks, such as language modeling [2], named entity recognition [14], sentiment analysis [11], and many others.…”
Section: Distributional Semantic Models (citation type: mentioning; confidence: 99%)
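A word-based semantic space of the kind this statement cites can be sketched, under simplifying assumptions, as sparse word vectors built from symmetric co-occurrence counts; real systems typically add weighting (e.g. PMI) and dimensionality reduction on top:

```python
from collections import defaultdict

def cooccurrence_vectors(tokens, window=2):
    """Build sparse word vectors from symmetric co-occurrence counts
    within a fixed-size context window."""
    vecs = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vecs[w][tokens[j]] += 1
    return vecs

# Hypothetical toy sentence; a real space is built from a large corpus.
tokens = "language models capture latent semantics in language data".split()
vecs = cooccurrence_vectors(tokens)
```

Words that occur in similar contexts end up with similar vectors, which is what makes such spaces useful for the NLP tasks listed above.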
“…Similar techniques have been explored for topic identification and dynamic language model adaptation using the vector space model [12], LSA [13], the relevance language model [14], semi-supervised language models [15], and the topic tracking language model [16]. The LDA technique has been widely explored to form unsupervised adapted language models [17] and topic-specific language models for inflectional languages [18]. For many languages, the linguistic word-level approach [19], the syntactico-statistical approach [19], and the statistical phrase-level approach [20] have been used to build an adapted language model for improving the speech recognition rate.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
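The topic-based language model adaptation surveyed in that statement is commonly realized by linearly interpolating a topic-specific distribution with a background model. A minimal unigram sketch, using hypothetical toy distributions (real systems interpolate full n-gram models and estimate the weight on held-out data):

```python
def interpolate(p_background, p_topic, lam=0.7):
    """Linearly mix a background unigram model with a topic-specific one:
    p(w) = lam * p_bg(w) + (1 - lam) * p_topic(w)."""
    vocab = set(p_background) | set(p_topic)
    return {w: lam * p_background.get(w, 0.0) + (1 - lam) * p_topic.get(w, 0.0)
            for w in vocab}

# Hypothetical toy distributions; both sum to 1, so the mixture does too.
background = {"the": 0.5, "game": 0.1, "market": 0.1, "cat": 0.3}
sports_topic = {"the": 0.4, "game": 0.5, "cat": 0.1}
adapted = interpolate(background, sports_topic)
```

After adaptation, topic-relevant words ("game") gain probability mass relative to the background model, which is what lowers perplexity on in-topic speech.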