2020
DOI: 10.1109/access.2020.3031549

Contrastive Representation Learning: A Framework and Review

Abstract: Contrastive Learning has recently received interest due to its success in self-supervised representation learning in the computer vision domain. However, the origins of Contrastive Learning date as far back as the 1990s and its development has spanned across many fields and domains including Metric Learning and natural language processing. In this paper, we provide a comprehensive literature review and we propose a general Contrastive Representation Learning framework that simplifies and unifies many different…

Cited by 507 publications (220 citation statements). References: 90 publications.
“…Contrastive learning has been successfully applied to unsupervised representation learning [19]-[21], [32], [33] in recent years. Typical contrastive losses are inspired by noise contrastive estimation [34] or N-pair losses [35], aiming to pull positive samples together while pushing negative samples away.…”
Section: Contrastive Learning (mentioning)
confidence: 99%
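The pull-together/push-apart behaviour described in the excerpt above is commonly implemented with an InfoNCE-style objective. The snippet below is a minimal, illustrative sketch rather than code from the reviewed paper; the function name info_nce_loss, the temperature value, and the tensor shapes are assumptions made for the example.

import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    anchor, positive: (batch, dim) embeddings of two matched views.
    negatives: (batch, n_neg, dim) embeddings of non-matching samples.
    """
    # Cosine similarity via L2-normalised embeddings.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Positive logit: similarity between the anchor and its positive view.
    pos_logit = (anchor * positive).sum(dim=-1, keepdim=True)      # (batch, 1)
    # Negative logits: similarity between the anchor and each negative.
    neg_logits = torch.einsum("bd,bnd->bn", anchor, negatives)     # (batch, n_neg)

    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    # The "correct class" is always index 0 (the positive pair), so
    # minimising cross-entropy pulls positives together and pushes negatives away.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)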
“…Nowadays, transformer architectures (e.g., [3, 11, 12, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110]) are seen as the state of the art for deep learning of the type of present interest. As per the definition of the contrastive learning framework mentioned in [61, 66], we add an extra autoencoder in which the encoder behaves as a projection head. The outputs of the transformer encoder, which we regard as representations, are to be of a higher dimension.…”
Section: Introduction (mentioning)
confidence: 99%
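As an illustration of the projection-head idea mentioned in the excerpt above, a common pattern is a small MLP that maps the higher-dimensional encoder outputs (e.g., transformer representations) into the space where the contrastive loss is computed. The class below is a generic, hypothetical sketch; the dimensions and layer choices are placeholders, not the cited authors' architecture.

import torch.nn as nn

class ProjectionHead(nn.Module):
    """Illustrative projection head mapping encoder outputs to a
    lower-dimensional space for the contrastive loss. Dimensions are
    placeholders, not taken from the cited work."""

    def __init__(self, in_dim=768, hidden_dim=512, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, representations):
        # The contrastive loss is applied to these projections, while the
        # original representations are typically kept for downstream tasks.
        return self.net(representations)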
“…In the natural language and image processing fields, both supervised and unsupervised approaches enabled the creation of powerful pre-trained models that are often employed in many different tasks [8,9,10]. Contrastive learning recently received a lot of attention due to its success in the unsupervised pre-training of DNNs, enabling flexible representations to be learned without labels associated with the content [10,11]. Only the definition of positive and negative data pairs is required in order to learn a model that will produce a latent space reflecting semantic characteristics.…”
Section: Introduction (mentioning)
confidence: 99%
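The excerpt above notes that only positive and negative data pairs need to be defined. A common way to do this, sketched below under assumed names (make_pairs, augment), is to create two stochastically augmented views of each sample as positives and to treat the remaining items in the batch as negatives.

import torch

def make_pairs(batch, augment):
    """Sketch of positive/negative pair construction, assuming `augment`
    is any stochastic transform (crop, noise, masking, ...).

    Two augmented views of the same item form a positive pair; every
    other item in the batch serves as a negative for that anchor.
    """
    view_a = torch.stack([augment(x) for x in batch])
    view_b = torch.stack([augment(x) for x in batch])
    # For anchor i: positive = view_b[i]; negatives = view_b[j] for j != i.
    return view_a, view_b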