2019 4th International Conference on Computer Science and Engineering (UBMK)
DOI: 10.1109/ubmk.2019.8907181
Problems Caused by Semantic Drift in WordNet Synset Construction

Cited by 4 publications (1 citation statement)
References 1 publication
“…A neural language model trained on a dataset dating from 1988 or 2018 can therefore fail to perform adequately in tasks ranging from question answering to dialogue because the model is built on data that no longer represents current language. Several authors have studied the problems of semantic drift in Wordnet and other training sets [5,68,54] and proposed ways to address and mitigate it from a technical perspective [32,43], but these approaches are rarely incorporated in modern NLP models, which can be trained on large corpora reflecting decades of language data, including books from previous centuries [6].…”
Section: Technical Issues
confidence: 99%