Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1318

Relational Word Embeddings

Abstract: While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning the word embedding. Such strategies may not be optimal, however, as they are limited by the coverage of available resources and conflate similarity with other forms of relatedness. As an alternative, in…
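
To make the contrast in the abstract concrete, here is a minimal sketch (toy 3-dimensional vectors invented for illustration, not taken from any trained model): cosine similarity between word vectors captures attributional relatedness, while the offset between two vectors is a crude proxy for the relation holding between a word pair.

```python
import numpy as np

# Toy word vectors (hypothetical values chosen for illustration only;
# real embeddings are learned from corpora and have hundreds of dimensions).
vec = {
    "paris":  np.array([0.9, 0.1, 0.3]),
    "france": np.array([0.8, 0.2, 0.7]),
    "tokyo":  np.array([0.4, 0.9, 0.3]),
    "japan":  np.array([0.3, 1.0, 0.7]),
}

def cosine(a, b):
    """Attributional similarity: how alike two words are overall."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Relational information, by contrast, lives in the *offset* between vectors:
# in a well-behaved space, capital-of pairs share roughly the same offset.
offset_fr = vec["france"] - vec["paris"]
offset_jp = vec["japan"] - vec["tokyo"]

print(cosine(vec["paris"], vec["france"]))   # attributional relatedness
print(cosine(offset_fr, offset_jp))          # do the two relations align?
```
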

Cited by 12 publications (9 citation statements). References 35 publications (39 reference statements).

“…However, we have also noted some potential issues regarding circularity within this approach, and emphasized that purely associative networks may not be sufficient to fully characterize the vast amount of relational information available to humans. We have also provided future directions to integrate this information into existing networks by utilizing distributional models (e.g., ConceptNet Numberbatch) and techniques being widely applied in machine learning (e.g., Camacho‐Collados et al., 2019). Similarly, in thinking about the processes that drive the network‐based perspective, as discussed, the notion of random walks, automatic spreading activation versus attentional analytic processes, and growth mechanisms have all been applied to explain a wide range of cognitive phenomena.…”
Section: Discussion
confidence: 99%
“…Another line of work has therefore advocated to directly learn relation vectors from distributional statistics, i.e. vectors encoding the relationship between two words (Washio and Kato 2018a; Jameel, Bouraoui, and Schockaert 2018; Espinosa Anke and Schockaert 2018; Joshi et al. 2019; Washio and Kato 2018b; Camacho-Collados et al. 2019)…”
Section: Related Work
confidence: 99%
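
As a rough illustration of what a distributionally learned relation vector can look like (a minimal sketch under simplifying assumptions, not the exact model of any paper cited above): the vector for a word pair is taken to be the average embedding of the words occurring between the pair's mentions in corpus sentences, so it summarizes the contexts in which the pair co-occurs.

```python
import numpy as np

def relation_vector(pair, sentences, vec, dim=50):
    """Average the embeddings of the words occurring between the two
    target words in corpus sentences: a simplified distributional
    relation vector. `vec` maps words to arrays; `pair` is (head, tail)."""
    head, tail = pair
    acc, count = np.zeros(dim), 0
    for sent in sentences:
        tokens = sent.lower().split()
        if head in tokens and tail in tokens:
            i, j = tokens.index(head), tokens.index(tail)
            lo, hi = min(i, j), max(i, j)
            for w in tokens[lo + 1 : hi]:   # words between the pair
                if w in vec:
                    acc += vec[w]
                    count += 1
    return acc / count if count else acc

# Usage with toy data (random vectors stand in for trained embeddings):
rng = np.random.default_rng(0)
vec = {w: rng.standard_normal(50) for w in ["is", "the", "capital", "of"]}
corpus = ["Paris is the capital of France"]
r = relation_vector(("paris", "france"), corpus, vec)
```
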
“…This is most clearly illustrated in the fact that predicting analogical word pairs is a commonly used benchmark for evaluating word embeddings. The problem of relation induction using word embeddings has also been studied (Vylomova et al. 2016; Drozd, Gladkova, and Matsuoka 2016; Bouraoui, Jameel, and Schockaert 2018; Vulić and Mrkšić 2018; Camacho-Collados, Espinosa-Anke, and Schockaert 2019). In this case, new instances of the relation are predicted based only on pre-trained word vectors.…”
Section: Introduction
confidence: 99%
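
The analogy benchmark mentioned here is commonly solved with the vector-offset method (3CosAdd): for a query a : b :: c : ?, return the vocabulary word whose vector is closest to b − a + c, excluding the query words themselves. A self-contained sketch with illustrative toy vectors:

```python
import numpy as np

def solve_analogy(a, b, c, vec):
    """3CosAdd: return the word maximizing cos(v, vec[b] - vec[a] + vec[c]),
    excluding the three query words themselves."""
    target = vec[b] - vec[a] + vec[c]
    target /= np.linalg.norm(target)
    best, best_sim = None, -1.0
    for w, v in vec.items():
        if w in (a, b, c):
            continue
        sim = float(v @ target / np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Toy vectors (illustrative values, not from a trained model):
vec = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([0.2, 0.0, 1.0]),
    "queen": np.array([0.2, 1.0, 1.0]),
}
print(solve_analogy("man", "woman", "king", vec))  # -> "queen"
```
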
“…Levy and Goldberg (2014a) used dependency-based contexts, resulting in two separate vector spaces; however, the relation types were embedded into the vocabulary and the model was trained only in one direction. Camacho-Collados et al. (2019) proposed to learn separate sets of relation vectors in addition to standard word vectors and showed that such relation vectors encode knowledge that is often complementary to what is coded in word vectors. Rei et al. (2018) and Vulić and Mrkšić (2018) described related task-dependent neural nets for mapping word embeddings into relation-specific spaces for scoring lexical entailment.…”
Section: Background and Motivation
confidence: 99%
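
A minimal sketch of the relation-specific mapping idea described above, assuming a single linear map trained with a hinge loss (this deliberately simplifies away the task-dependent neural architectures of Rei et al. and Vulić and Mrkšić; all data below are toy stand-ins): the hyponym vector is mapped into an entailment-specific space and scored against the hypernym vector.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 20

# Toy pre-trained embeddings (random stand-ins for real vectors).
words = ["dog", "animal", "car", "vehicle", "tree", "banana"]
vec = {w: rng.standard_normal(dim) for w in words}

# Positive (hyponym, hypernym) pairs and negative pairs.
pos = [("dog", "animal"), ("car", "vehicle")]
neg = [("tree", "banana"), ("dog", "vehicle")]

W = np.eye(dim)  # relation-specific map, initialized to identity

def score(x, y):
    """Entailment score: dot product after mapping x into the relation space."""
    return float((W @ vec[x]) @ vec[y])

# Hinge-loss training: push positive scores above negatives by a margin of 1.
lr = 0.01
for _ in range(200):
    for (xp, yp), (xn, yn) in zip(pos, neg):
        if score(xp, yp) - score(xn, yn) < 1.0:
            # Gradient of score(x, y) w.r.t. W is outer(vec[y], vec[x]).
            W += lr * (np.outer(vec[yp], vec[xp]) - np.outer(vec[yn], vec[xn]))

print(score("dog", "animal") > score("tree", "banana"))  # ideally True
```
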