Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2: Short Papers, 2014
DOI: 10.3115/v1/e14-4008
Chasing Hypernyms in Vector Spaces with Entropy

Abstract: In this paper, we introduce SLQS, a new entropy-based measure for the unsupervised identification of hypernymy and its directionality in Distributional Semantic Models (DSMs). SLQS is assessed through two tasks: (i.) identifying the hypernym in hyponym-hypernym pairs, and (ii.) discriminating hypernymy among various semantic relations. In both tasks, SLQS outperforms other state-of-the-art measures.
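The idea behind an entropy-based generality measure like SLQS can be sketched in a few lines: the median entropy of a word's most associated contexts serves as a proxy for its semantic generality, and comparing the two medians indicates the direction of hypernymy. The toy co-occurrence data and function names below are illustrative, not taken from the paper:

```python
import math
from statistics import median

def context_entropy(context, cooc):
    """Shannon entropy of the word distribution observed in one context."""
    counts = cooc[context]
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def slqs(x, y, top_contexts, cooc):
    """SLQS(x, y) = 1 - E_x / E_y, where E_w is the median entropy of w's
    most associated contexts; a positive score suggests y is the more
    general term, i.e. the hypernym."""
    e_x = median(context_entropy(c, cooc) for c in top_contexts[x])
    e_y = median(context_entropy(c, cooc) for c in top_contexts[y])
    return 1 - e_x / e_y

# Toy data: the general term occurs in less informative (higher-entropy) contexts.
cooc = {
    "bark": {"dog": 10},                                 # entropy 0
    "eat":  {"dog": 1, "cat": 1, "bird": 1, "fish": 1},  # entropy 2
}
top_contexts = {"dog": ["bark"], "animal": ["eat"]}
print(slqs("dog", "animal", top_contexts, cooc))  # positive → "animal" is the hypernym
```

In the full model, the contexts in `top_contexts` would be the N contexts most associated with each word (e.g. by a pointwise-mutual-information-style weight) in a large corpus, rather than hand-picked as here.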

Cited by 134 publications (174 citation statements); references 15 publications.
“…More recently, several studies suggest that DIH is not correct in all cases (Santus et al., 2014; Rimell, 2014). For example, "American" is a hypernym of "Barack Obama", but the (politics-related) contexts of "Barack Obama" cannot be covered by those of "American".…”
Section: Unsupervised Measures
confidence: 99%
“…Earlier work on hypernym modeling was unsupervised, and leveraged various interpretations of the distributional hypothesis. Most of the recent work on the subject is, however, supervised, and mainly based on using word embeddings as input for classification or prediction (e.g. Baroni et al., 2012; Santus et al., 2014; Fu et al., 2014; Weeds et al., 2014; Sanchez Carmona and Riedel, 2017; Nguyen et al., 2017). As shown by Shwartz et al. (2016), pattern-based and distributional evidence can be effectively combined within a neural architecture.…”
Section: Related Work
confidence: 99%
“…Distributional models come in both unsupervised and supervised flavors. Unsupervised metrics for hypernymy detection assume either that the hyponym's contexts are included in the hypernym's contexts (Weeds and Weir, 2003; Kotlerman et al., 2010) or that the linguistic contexts of a hyponym are more informative than the contexts of its hypernyms (Rimell, 2014; Santus et al., 2014). Supervised hypernymy classifiers represent the pair of words by combining their distributional vectors in different ways, e.g. concatenating them (Baroni et al., 2012) or subtracting them (Roller et al., 2014), and feeding the resulting vector to a supervised classifier such as logistic regression.…”
Section: Related Work
confidence: 99%
“…Distributional models predict hypernymy relations by combining the raw distributional vectors of the concepts in a pair (Baroni et al., 2012; Roller et al., 2014; Santus et al., 2014), whereas path-based models base their predictions on lexico-syntactic paths from co-occurrence contexts obtained from a large corpus (Snow et al., 2004; Nakashole et al., 2012). Later work combines the path-based and distributional models to reach state-of-the-art performance in hypernymy detection.…”
Section: Introduction
confidence: 99%
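The vector-combination schemes mentioned in the last two excerpts (concatenation and difference) amount to simple feature construction over a word pair's embeddings; a sketch, assuming dense embeddings represented as plain Python lists:

```python
def concat_features(u, v):
    """Concatenation scheme (as in Baroni et al., 2012): the pair (u, v)
    is represented by the 2d-dimensional vector u followed by v."""
    return u + v

def diff_features(u, v):
    """Difference scheme (as in Roller et al., 2014): the pair (u, v)
    is represented by the d-dimensional vector u - v."""
    return [a - b for a, b in zip(u, v)]

# Toy 3-dimensional embeddings for a candidate (hyponym, hypernym) pair.
u = [0.2, 0.5, 0.1]
v = [0.4, 0.5, 0.3]
pair_concat = concat_features(u, v)  # 6-dimensional classifier input
pair_diff = diff_features(u, v)      # 3-dimensional classifier input
```

Either feature vector would then be fed to a supervised classifier (e.g. logistic regression) trained on labeled hyponym-hypernym pairs.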