2019
DOI: 10.3233/sw-190349
Wan2vec: Embeddings learned on word association norms

Abstract: Word embeddings are powerful for many tasks in natural language processing. In this work, we learn word embeddings using weighted graphs from word association norms (WAN) with the node2vec algorithm. Although building WAN is a difficult and time-consuming task, training the vectors from these resources is a fast and efficient process. This allows us to obtain good quality word embeddings from small corpora. We evaluate our word vectors in two ways: intrinsic and extrinsic. The intrinsic evaluation was performe…
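The abstract's core idea is to run node2vec over a weighted word association graph, so that random walks visit strongly associated words more often before a skip-gram model is trained on the walks. As a rough illustration only (not the paper's implementation), the sketch below generates weight-proportional random walks on a toy WAN; the graph, its association strengths, and the helper names are invented, and node2vec's p/q return/in-out bias is deliberately omitted for brevity.

```python
import random

# Toy word association graph: edge weights stand in for association
# strengths (hypothetical values, for illustration only).
WAN = {
    "dog":  {"cat": 5.0, "bone": 2.0},
    "cat":  {"dog": 5.0, "milk": 3.0},
    "bone": {"dog": 2.0},
    "milk": {"cat": 3.0},
}

def weighted_walk(graph, start, length, rng):
    """One random walk where the next node is drawn with probability
    proportional to edge weight (a simplification of node2vec's
    biased second-order walks)."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        if not neighbors:
            break
        nodes = list(neighbors)
        weights = [neighbors[n] for n in nodes]
        walk.append(rng.choices(nodes, weights=weights, k=1)[0])
    return walk

def generate_walks(graph, num_walks, length, seed=0):
    """Start `num_walks` walks from every node; the resulting walk
    corpus would then be fed to a skip-gram model (e.g. word2vec)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in graph:
            walks.append(weighted_walk(graph, node, length, rng))
    return walks

walks = generate_walks(WAN, num_walks=2, length=4)
```

In the full method, these walks play the role of sentences in a text corpus, which is why a WAN-derived embedding can be trained quickly even when no large corpus exists.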

Cited by 5 publications (2 citation statements) · References 32 publications
“…Basic types of relations (synonymy, antonymy, hypernymy, meronymy) in the data have been identified and shared tasks on their automatic classification have been run [22]. Various approaches have also been proposed for learning vectors that capture the association relationship between words [23,24,25]. The discussed need for an explanation of word similarity, as well as of differences between related words, links the work to research on discriminative attribute identification [26] and explanation mechanisms in natural language inference systems [27].…”

Section: Related Work (mentioning, confidence: 99%)
“…Gemma et al. [20] introduced a fast and efficient word embedding model built on a weighted graph from word association norms (WAN). Although this model works well for low-resource languages, building a WAN is still a difficult and time-consuming task.…”

Section: Word Embeddings for Low-Resource Languages (mentioning, confidence: 99%)