2020 · Preprint
DOI: 10.48550/arxiv.2008.10150

Contrastive learning, multi-view redundancy, and linear models

Abstract: Self-supervised learning is an empirically successful approach to unsupervised learning based on creating artificial supervised learning problems. A popular self-supervised approach to representation learning is contrastive learning, which leverages naturally occurring pairs of similar and dissimilar data points, or multiple views of the same data. This work provides a theoretical analysis of contrastive learning in the multi-view setting, where two views of each datum are available. The main result is that li…
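To make the setup in the abstract concrete, below is a minimal NumPy sketch of an InfoNCE-style contrastive objective on two views of the same underlying data, followed by a least-squares linear readout of a downstream target from the learned representation (mirroring the abstract's focus on linear functions of the representation). This is an illustrative toy, not the paper's construction: the linear encoder, the random-search updates, the data-generating process, and all dimensions are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-view data: a shared latent z generates two views x1, x2.
# (Hypothetical setup for illustration; not the paper's construction.)
n, d_latent, d_obs, d_rep = 512, 4, 16, 8
z = rng.normal(size=(n, d_latent))
A1, A2 = rng.normal(size=(d_latent, d_obs)), rng.normal(size=(d_latent, d_obs))
x1 = z @ A1 + 0.1 * rng.normal(size=(n, d_obs))
x2 = z @ A2 + 0.1 * rng.normal(size=(n, d_obs))

def encode(x, W):
    """Linear encoder followed by L2 normalization."""
    h = x @ W
    return h / np.linalg.norm(h, axis=1, keepdims=True)

def info_nce(W, x1, x2, temperature=0.1):
    """InfoNCE-style loss: each (x1_i, x2_i) is a positive pair;
    all other x2_j in the batch act as negatives."""
    f1, f2 = encode(x1, W), encode(x2, W)
    logits = f1 @ f2.T / temperature             # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Crude random-search "training" just to show the objective decreasing.
W = rng.normal(size=(d_obs, d_rep))
best = info_nce(W, x1, x2)
for _ in range(200):
    cand = W + 0.05 * rng.normal(size=W.shape)
    loss = info_nce(cand, x1, x2)
    if loss < best:
        W, best = cand, loss

# Linear probe: least-squares readout of a downstream target from the representation.
y = z @ rng.normal(size=(d_latent,))   # downstream target depends on the shared latent
reps = encode(x1, W)
coef, *_ = np.linalg.lstsq(reps, y, rcond=None)
print("contrastive loss:", best, " probe MSE:", np.mean((reps @ coef - y) ** 2))
```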

Cited by 10 publications (12 citation statements) · References 20 publications (37 reference statements)
“…Class Collision and Uniformity for One-Class Classification. While contrastive representations have achieved state-of-the-art performance on visual recognition tasks (Oord et al., 2018; Hénaff et al., 2019; Wang & Isola, 2020) and have been theoretically proved to be effective for multi-class classification (Saunshi et al., 2019; Tosh et al., 2020), we argue that this could be problematic for one-class classification.…”
Section: Contrastive Learning · citation type: mentioning
confidence: 70%
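The excerpt above appeals to the alignment/uniformity picture of contrastive learning (Wang & Isola, 2020): positive pairs are pulled together while the overall embedding distribution is pushed toward uniformity on the sphere, which the citing authors argue can hurt one-class classification because even same-class points get spread apart. As a hedged illustration, the sketch below computes the two commonly cited quantities on placeholder embeddings; the exponent and temperature values and the data are assumptions, not taken from either paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def alignment(f1, f2, alpha=2):
    """Mean distance between positive pairs (lower = better aligned)."""
    return np.mean(np.linalg.norm(f1 - f2, axis=1) ** alpha)

def uniformity(f, t=2):
    """Log of the mean Gaussian-kernel similarity over all distinct pairs;
    more negative = embeddings spread more uniformly on the sphere."""
    d2 = np.sum((f[:, None, :] - f[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(f), k=1)
    return np.log(np.mean(np.exp(-t * d2[iu])))

# Hypothetical normalized embeddings of two views of the same points.
f = rng.normal(size=(256, 8))
f /= np.linalg.norm(f, axis=1, keepdims=True)
f2 = f + 0.05 * rng.normal(size=f.shape)
f2 /= np.linalg.norm(f2, axis=1, keepdims=True)

print("alignment:", alignment(f, f2), " uniformity:", uniformity(f))
```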
“…While many theoretical results on contrastive SSL (Arora et al., 2019; Lee et al., 2020; Tosh et al., 2020; Wen & Li, 2021) do exist, a similar study of nc-SSL has been very rare. As one of the first works in this direction, Tian et al. (2021) show that while the global optimum of the non-contrastive loss is indeed a trivial one, by following the gradient direction in nc-SSL one can find a local optimum that admits a nontrivial representation.…”
Section: Introduction · citation type: mentioning
confidence: 99%
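The quoted point about non-contrastive SSL can be seen in a toy calculation: with only a term pulling the two views' embeddings together and nothing playing the role of negatives, a collapsed encoder is a global optimum of the objective. The sketch below is a minimal numerical illustration under assumed placeholder dimensions; it is not Tian et al.'s analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

n, d_obs, d_rep = 256, 16, 8
x1 = rng.normal(size=(n, d_obs))
x2 = x1 + 0.1 * rng.normal(size=(n, d_obs))   # second view = noisy copy of the first

def nc_loss(W):
    """Non-contrastive objective: pull the two views' embeddings together,
    with no negatives or other term discouraging collapse."""
    return np.mean(np.sum((x1 @ W - x2 @ W) ** 2, axis=1))

W_random = rng.normal(size=(d_obs, d_rep))
W_collapsed = np.zeros((d_obs, d_rep))        # maps every input to the same point

print("random encoder loss:   ", nc_loss(W_random))
print("collapsed encoder loss:", nc_loss(W_collapsed))  # exactly 0: the trivial global optimum
```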
“…In this paper, we further study the situation in which the CI condition does not hold, and explore the idea of applying a learnable function to the input to make the CI condition hold. Other works (Saunshi et al., 2019; Tosh et al., 2020) study the generalization error of contrastive-learning-based SSL, whose setting is different from ours. Most recently, Bansal et al. (2020) analyze the generalization gap for most SSL methods.…”
Section: Related Work · citation type: mentioning
confidence: 71%