2023
DOI: 10.1145/3561970
A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned, and Perspectives

Abstract: Modern natural language processing (NLP) methods employ self-supervised pretraining objectives such as masked language modeling to boost the performance of various downstream tasks. These pretraining methods are frequently extended with recurrence, adversarial, or linguistic property masking. Recently, contrastive self-supervised training objectives have enabled successes in image representation pretraining by learning to contrast input-input pairs of augmented images as either similar or dissimilar. In NLP ho…
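As a rough illustration of the contrastive objective the abstract describes, here is a minimal NT-Xent-style sketch in PyTorch (our own sketch, not code from the paper): two augmented views of the same input form a positive pair and are contrasted against every other example in the batch.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # z1, z2: (batch, dim) embeddings of two augmentations of the same inputs.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    z = torch.cat([z1, z2], dim=0)                  # (2B, dim)
    sim = z @ z.t() / temperature                   # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))               # exclude self-similarity
    batch = z1.size(0)
    # The positive for each example is the other view of the same input.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

Pairs judged similar (two views of one input) are pulled together, while all other in-batch pairs act as dissimilar negatives; the temperature controls how sharply the loss focuses on hard negatives.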

Cited by 45 publications (13 citation statements) | References 27 publications
“…Wu et al (2020) explore input augmentation techniques for sentence representation learning with contrastive objectives. However, they use it as an auxiliary loss during full-fledged MLM pretraining from scratch (Rethmeier and Augenstein, 2021). In contrast, our post-hoc approach offers a lightweight and fast self-supervised transformation from any pretrained MLM to a universal language encoder at lexical or sentence level.…”
Section: Related Work
confidence: 99%
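For concreteness, a hypothetical sketch of the setup this excerpt describes, with a contrastive objective used as an auxiliary loss during MLM pretraining; encoder, mlm_head, and augment are assumed components, and the weighting alpha is illustrative (this reuses nt_xent_loss from the sketch above).

import torch.nn.functional as F

def pretraining_step(encoder, mlm_head, augment, batch, alpha: float = 0.1):
    # Primary objective: masked language modeling on the masked batch.
    hidden = encoder(batch["masked_input_ids"])             # (B, T, H)
    logits = mlm_head(hidden)                               # (B, T, vocab)
    mlm_loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        batch["mlm_labels"].view(-1),
        ignore_index=-100,                                  # ignore unmasked positions
    )

    # Auxiliary objective: contrast two augmented views of the same sentences,
    # mean-pooled into sentence embeddings.
    z1 = encoder(augment(batch["input_ids"])).mean(dim=1)
    z2 = encoder(augment(batch["input_ids"])).mean(dim=1)
    contrastive_loss = nt_xent_loss(z1, z2)

    return mlm_loss + alpha * contrastive_loss

The post-hoc alternative the excerpt advocates would instead start from an already pretrained MLM and train only the contrastive part, avoiding pretraining from scratch.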
“…Contrastive Learning. Recently, numerous studies have been applying contrastive learning techniques to NLP tasks, yielding promising results (Giorgi et al. 2021; Zhang et al. 2022a,d; Rethmeier and Augenstein 2023; Lingling et al. 2023). In the task of aspect-level sentiment classification, several insightful researchers have explored incorporating contrastive learning into the model training process.…”
Section: Related Work
confidence: 99%
“…Building on the success of contrastive data augmentation across various domains, including vision learning [7], text mining [18], and graph modeling [38], our DCRec method harnesses self-supervised signals through contrastive learning across different item semantic views. Nonetheless, the popularity bias is often overlooked, as conformity can entangle real interests and subsequently influence user behaviors [3,36].…”
Section: Adaptive Cross-view Contrastive Learning
confidence: 99%
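As a rough sketch of what "contrastive learning across different item semantic views" could look like (our assumption, not the DCRec implementation): embeddings of the same items produced by two different views are aligned item-by-item and contrasted against the other items in the batch.

import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(view_a: torch.Tensor, view_b: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    # view_a, view_b: (num_items, dim) embeddings of the same items from two semantic views.
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                        # (num_items, num_items)
    targets = torch.arange(a.size(0), device=a.device)      # matching items are positives
    # Symmetric InfoNCE: align view A to view B and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))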