2020
DOI: 10.1016/j.ins.2020.04.048

Enhanced word embeddings using multi-semantic representation through lexical chains

Cited by 22 publications (22 citation statements)
References 34 publications
“…In the same way that word2vec [18] inspired many models in NLP [4,26,25], the excellent performance of BERT [8], a Transformer-based model [28], led to its numerous adaptations for language tasks [34,6,29,30]. Domain-specific models built on top of Transformers typically outperform their baselines on related tasks [12].…”
Section: Related Work (mentioning)
confidence: 99%
“…Word2vec and doc2vec can both capture latent semantic meaning from textual data using efficient neural network language models. Prediction-based word embedding models, such as word2vec and doc2vec, have proven superior to count-based models, such as bag-of-words (BOW), for several problems in Natural Language Processing (NLP), such as quantifying word similarity [15], classifying documents [16], and analyzing sentiment [17]. Gharavi et al. employed word embeddings to perform text alignment for sentences [18].…”
Section: Related Work (mentioning)
confidence: 99%
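As a minimal illustration of the prediction-based embeddings mentioned in the excerpt above, the sketch below trains a skip-gram word2vec model on a toy corpus and queries a word-similarity score with gensim. The corpus and hyperparameter values are illustrative assumptions, not the setup of any cited paper.

```python
# Minimal sketch: a prediction-based embedding (word2vec skip-gram) trained on a
# toy corpus, then queried for word similarity. Data and parameters are
# illustrative assumptions only.
from gensim.models import Word2Vec

sentences = [
    ["word", "embeddings", "capture", "latent", "semantic", "meaning"],
    ["doc2vec", "extends", "word2vec", "to", "whole", "documents"],
    ["sentiment", "analysis", "benefits", "from", "word", "embeddings"],
]

# sg=1 selects the skip-gram objective; vector_size, window and epochs are toy values.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=100)

# Cosine similarity between two in-vocabulary words.
print(model.wv.similarity("word", "embeddings"))
```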
“…Recently, the NLP community adapted and extended the neural language model BERT [20] for a variety of tasks [21]-[26], similar to the way that word2vec [13] has influenced many later models in NLP [15], [16], [27]. Based on the Transformer architecture [28], BERT employs two pre-training tasks, i.e., Masked Language Model (MLM) and Next Sentence Prediction (NSP), to capture general aspects of language.…”
Section: Related Work (mentioning)
confidence: 99%
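To make the MLM pre-training task described in the excerpt above concrete, the following sketch uses the Hugging Face transformers fill-mask pipeline with the public bert-base-uncased checkpoint to predict a masked token from its bidirectional context; the example sentence is an assumption chosen for illustration.

```python
# Minimal sketch of BERT's Masked Language Model objective: predict the token
# hidden behind [MASK] from both left and right context. The checkpoint is a
# public pretrained model; the input sentence is an illustrative assumption.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Each prediction is a dict with the proposed token and its probability.
for prediction in fill_mask("Word embeddings map words to dense [MASK] vectors."):
    print(prediction["token_str"], round(prediction["score"], 3))
```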
“…Word embedding is a technique for analyzing the context of words in a sentence and converting them into vector values. Word embedding methods include GloVe [16], FastText [17], and Word2vec [18]. Word2vec fails to consider the co-occurrence frequency of entire sentences because learning is performed only within a user-specified window.…”
Section: B. Word2vec-based Word Embedding (mentioning)
confidence: 99%
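The excerpt above describes word embedding as mapping words to vectors. As a small hedged sketch, the snippet below loads a pretrained GloVe vector set through gensim's downloader (the model name is assumed to be available in gensim's public catalogue and is not a resource from the cited papers) and inspects a word vector and a similarity score.

```python
# Minimal sketch: a word-embedding model maps each word to a dense vector.
# "glove-wiki-gigaword-50" is assumed to exist in gensim's public download
# catalogue; it downloads on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

print(vectors["language"][:5])              # first 5 dimensions of one word vector
print(vectors.similarity("king", "queen"))  # cosine similarity between two words
```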
“…This generates vectors even for words that are not found in the dictionary; moreover, its training process is fast [17]. Word2vec was proposed by Google as an improvement on the neural network language model [18]. It infers the meaning of words based on the distributional hypothesis that words appearing in similar contexts have similar meanings.…”
Section: B. Word2vec-based Word Embedding (mentioning)
confidence: 99%
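The out-of-dictionary property mentioned in the excerpt above (a FastText trait, since vectors are composed from character n-grams) can be sketched with gensim as follows; the toy corpus and parameters are assumptions for illustration.

```python
# Minimal sketch: FastText builds word vectors from character n-grams, so a word
# that never occurred in training still receives an embedding. Corpus and
# parameters are toy assumptions.
from gensim.models import FastText

sentences = [
    ["embedding", "models", "learn", "word", "vectors"],
    ["fasttext", "composes", "vectors", "from", "character", "ngrams"],
]

model = FastText(sentences, vector_size=50, window=3, min_count=1, epochs=20)

print("embeddings" in model.wv.key_to_index)   # False: never seen during training
print(model.wv["embeddings"][:5])              # a vector is still produced from its n-grams
```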