2022
DOI: 10.48550/arxiv.2204.09358

Generative or Contrastive? Phrase Reconstruction for Better Sentence Representation Learning

Abstract: Though offering amazing contextualized token-level representations, current pre-trained language models pay less attention to acquiring sentence-level representations during their self-supervised pre-training. If self-supervised learning is divided into two subcategories, generative and contrastive, then most existing studies show that sentence representation learning benefits more from contrastive methods than from generative methods. However, contrastive learning cannot be well compat…
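
For background on the contrastive side of the distinction the abstract draws, below is a minimal sketch of the in-batch InfoNCE objective that underlies most contrastive sentence-representation methods (e.g., SimCSE-style training); this is illustrative context, not the paper's own method, and the function name, temperature default, and use of in-batch negatives are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor: torch.Tensor,
                  positive: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """In-batch InfoNCE loss over sentence embeddings.

    anchor, positive: (batch, dim) embeddings of two views of the same
    sentences; the other sentences in the batch act as negatives.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    # Cosine similarity between every anchor and every positive: (batch, batch).
    logits = anchor @ positive.T / temperature
    # The i-th anchor's true positive sits on the diagonal, so the target
    # class for row i is simply index i.
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)
```

Pulling apart matching pairs on the diagonal while pushing away all off-diagonal pairs is what gives contrastive objectives their direct, sentence-level training signal, in contrast to generative objectives that supervise individual tokens.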
