2021
DOI: 10.48550/arxiv.2112.06733
Preprint

Measuring Context-Word Biases in Lexical Semantic Datasets

Abstract: State-of-the-art contextualized models such as BERT use tasks such as WiC and WSD to evaluate their word-in-context representations. This inherently assumes that performance in these tasks reflects how well a model represents the coupled word and context semantics. This study investigates this assumption by presenting the first quantitative analysis (using probing baselines) on the context-word interaction being tested in major contextual lexical semantic tasks. Specifically, based on the probing baseline perfo…
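The abstract refers to probing baselines that deliberately withhold either the target word or its context to quantify how much of a benchmark can be solved without true context-word coupling. The paper's own setup is not reproduced here; below is a minimal illustrative sketch, assuming a context-only baseline for a WiC-style task: the target word is replaced by [MASK], only the surrounding context is encoded with BERT, and a linear classifier is trained on the masked-position representations. The toy examples and helper names are hypothetical.

```python
# Illustrative sketch (not the paper's code): a context-only probing baseline for WiC.
# If a classifier that never sees the target word still scores well, the dataset can
# be partly solved without genuinely coupling word and context semantics.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def context_only_embedding(sentence: str, target: str) -> torch.Tensor:
    """Encode the sentence with the target word masked out, so the
    representation reflects the context alone, not the word itself."""
    masked = sentence.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    return hidden[0, mask_pos].mean(dim=0)

def pair_features(example: dict) -> list[float]:
    """Concatenate the two context-only embeddings as classifier features."""
    e1 = context_only_embedding(example["sentence1"], example["word"])
    e2 = context_only_embedding(example["sentence2"], example["word"])
    return torch.cat([e1, e2]).tolist()

# Toy WiC-style pairs (hypothetical): same target word, label 1 if same sense.
examples = [
    {"word": "bank", "sentence1": "She sat on the bank of the river.",
     "sentence2": "He fished from the bank at dawn.", "label": 1},
    {"word": "bank", "sentence1": "She sat on the bank of the river.",
     "sentence2": "The bank approved his loan.", "label": 0},
    {"word": "run", "sentence1": "They run a small bakery.",
     "sentence2": "Volunteers run the local shelter.", "label": 1},
    {"word": "run", "sentence1": "They run a small bakery.",
     "sentence2": "The kids run around the park.", "label": 0},
]
X = [pair_features(ex) for ex in examples]
y = [ex["label"] for ex in examples]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("context-only baseline accuracy (train):", clf.score(X, y))
```

A word-only baseline would be the mirror image of this sketch: encode the target word in isolation (no sentence) and train the same classifier, giving the other half of the context-word interaction measurement.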

Cited by 0 publications
References 24 publications (28 reference statements)