Findings of the Association for Computational Linguistics: EMNLP 2023
DOI: 10.18653/v1/2023.findings-emnlp.729

Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words

Hiroto Kurita, Goro Kobayashi, Sho Yokoi, et al.

Abstract: The performance of sentence encoders can be significantly improved through the simple practice of fine-tuning using contrastive loss. A natural question arises: what characteristics do models acquire during contrastive learning? This paper theoretically and experimentally shows that contrastive-based sentence encoders implicitly weight words based on information-theoretic quantities; that is, more informative words receive greater weight, while others receive less. The theory states that, in the lower bound of …
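
A minimal sketch of the idea in the abstract: rarer words carry more information, so an information-theoretic weighting gives them higher scores. The sketch below uses self-information, -log2 p(w), estimated from unigram corpus frequencies, as a stand-in for the quantity the abstract refers to; the toy corpus, the word list, and the choice of self-information itself are illustrative assumptions, not the paper's exact setup (the abstract is truncated before it states the precise quantity).

```python
import math
from collections import Counter

# Toy corpus; in practice p(w) would be estimated from a large corpus.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "a quantum computer factored the number",
]

# Unigram counts over the whole corpus.
counts = Counter(w for sent in corpus for w in sent.split())
total = sum(counts.values())

def self_information(word: str) -> float:
    """Self-information -log2 p(w); rarer (more informative) words score higher."""
    return -math.log2(counts[word] / total)

for word in ["the", "sat", "quantum"]:
    print(f"{word:>8}: {self_information(word):.2f} bits")
# Frequent function words like "the" score low, rare content words like
# "quantum" score high -- the kind of word weighting the abstract says
# contrastive-learning-based sentence encoders acquire implicitly.
```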
