Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.584

XL-WiC: A Multilingual Benchmark for Evaluating Semantic Contextualization

Abstract: The ability to correctly model distinct meanings of a word is crucial for the effectiveness of semantic representation techniques. However, most existing evaluation benchmarks for assessing this criterion are tied to sense inventories (usually WordNet), restricting their usage to a small subset of knowledge-based representation techniques. The Word-in-Context dataset (WiC) addresses the dependence on sense inventories by reformulating the standard disambiguation task as a binary classification problem; however, it…

Cited by 29 publications (43 citation statements)
References 34 publications
“…WiC (Pilehvar and Camacho-Collados, 2019) is the only SuperGLUE task where systems need to model the semantics of words in context (extended to several more languages in XL-WiC [Raganato et al., 2020]). In the Appendix we provide results for this task…”
mentioning
confidence: 99%
“…We use the cross-lingual Word-in-Context dataset (XL-WiC; Raganato et al., 2020) with data in 12 diverse languages. The task is to predict whether an ambiguous word that appears in two different sentences has the same meaning…”
Section: Word-in-context
mentioning
confidence: 99%
“…We follow Raganato et al. (2020) and add a binary classification head on top of the pretrained MMLM, which takes as input the concatenation of the target word's embeddings in the two contexts. We use the output of the 24th layer as the target word's representation…”
Section: B12 XL-WiC
mentioning
confidence: 99%
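The head described in the citation above can be sketched in a few lines: concatenate the target word's contextual embeddings from the two sentences and pass the result through a learned linear layer with a sigmoid, predicting "same meaning" above 0.5. The minimal, framework-free sketch below uses toy 2-dimensional stand-in embeddings and illustrative function and parameter names; it is not the paper's implementation, where the embeddings would come from layer 24 of a pretrained multilingual masked language model and the weights would be trained.

```python
import math

def wic_head(emb_a, emb_b, weights, bias):
    """Binary Word-in-Context head (illustrative sketch): score the
    concatenation of the two contextual embeddings with a linear layer,
    apply a sigmoid, and return True for 'same meaning'."""
    x = emb_a + emb_b                        # concatenate the two embeddings
    logit = sum(w * v for w, v in zip(weights, x)) + bias
    prob = 1.0 / (1.0 + math.exp(-logit))    # sigmoid
    return prob >= 0.5

# Toy usage: 2-dimensional stand-in embeddings and hand-picked weights.
same = wic_head([1.0, 0.0], [0.0, 1.0],
                weights=[2.0, -1.0, 0.5, 0.5], bias=0.0)
```

In practice the linear layer's weights and bias are learned jointly with (or on top of) the frozen or fine-tuned encoder using a binary cross-entropy loss over the WiC/XL-WiC training pairs.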
“…Recently, as an application of Word Sense Disambiguation (WSD) (Navigli, 2009, 2012), Word-in-Context (WiC) disambiguation has been framed as a binary classification task: identify whether the occurrences of a target word in two contexts correspond to the same meaning or not. The release of the WiC dataset (Pilehvar and Camacho-Collados, 2019), followed by the Multilingual Word-in-Context (XL-WiC) dataset (Raganato et al., 2020), has helped provide a common ground for evaluating and comparing systems while encouraging research in WSD and context-sensitive word embeddings…”
Section: Introduction
mentioning
confidence: 99%