2021
DOI: 10.48550/arxiv.2106.04156
Preprint

Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss

Abstract: Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm, which learns representations by pushing positive pairs, or similar examples from the same class, closer together while keeping negative pairs far apart. Despite the empirical successes, theoretical foundations are limited: prior analyses assume conditional independence of the positive pairs given the same class label, but recent empirical applications use heavily correlated positive pair…
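The paper's spectral contrastive loss makes this push/pull objective concrete: in population form, L(f) = -2·E_{(x,x⁺)}[f(x)ᵀf(x⁺)] + E_{x,x′}[(f(x)ᵀf(x′))²], where (x, x⁺) is a positive pair and x, x′ are independent draws. Below is a minimal PyTorch sketch of a batched estimator; the function name and the use of all in-batch cross pairs for the repulsion term are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Batched estimator of the spectral contrastive loss (hypothetical helper).

    z1, z2: (batch, dim) embeddings of two augmented views, where row i of
    z1 and row i of z2 come from the same underlying example (positive pair).
    """
    batch = z1.shape[0]
    # Attraction term: -2 * E[f(x)^T f(x+)] over matched rows (positive pairs).
    attract = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Repulsion term: E[(f(x)^T f(x'))^2], estimated here from all
    # batch-by-batch cross similarities (approximately independent pairs).
    sim = z1 @ z2.T                      # (batch, batch) inner products
    repel = (sim ** 2).sum() / (batch * batch)
    return attract + repel
```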

Cited by 7 publications (21 citation statements, all of type "mentioning"; citing publications from 2021–2022)
References 42 publications
“…In particular, it is shown that under conditional independence given the label and/or additional latent variables, representations learned by reconstruction-based self-supervised learning algorithms can achieve small errors in the downstream linear classification task (Arora et al., 2019; Tosh et al., 2021). More closely related to our work is the recent result of HaoChen et al. (2021), which analyzed contrastive learning without assuming conditional independence of positive pairs. Based on the concept of the augmentation graph, they showed that spectral decomposition of the augmented distribution leads to embeddings with provable accuracy guarantees under linear probe evaluation.…”
Section: Additional Related Work (citation type: mentioning)
confidence: 79%
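To make "spectral decomposition on the augmentation graph" concrete, here is a toy sketch on a small, fully observed graph; the random weight matrix and all variable names are assumptions for illustration (real augmentation graphs are enormous and only implicitly defined). The embeddings are the top eigenvectors of the normalized adjacency D^{-1/2} W D^{-1/2}, which HaoChen et al. (2021) show the minimizer of the spectral contrastive loss recovers up to a linear transform.

```python
import numpy as np

# Toy augmentation graph: W[i, j] is the edge weight between augmented
# views i and j (how likely they are to come from the same datapoint).
rng = np.random.default_rng(0)
W = rng.random((8, 8))
W = (W + W.T) / 2.0                      # symmetrize the weights
d = W.sum(axis=1)                        # node degrees
W_norm = W / np.sqrt(np.outer(d, d))     # normalized adjacency D^{-1/2} W D^{-1/2}

# Embeddings: top-k eigenvectors of the normalized adjacency,
# one k-dimensional row per node of the graph.
k = 3
eigvals, eigvecs = np.linalg.eigh(W_norm)  # eigenvalues in ascending order
F = eigvecs[:, -k:]
```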
“…Theoretical works on self-supervised learning. A recent line of theoretical work has studied self-supervised learning (Arora et al., 2019; Tosh et al., 2021; HaoChen et al., 2021). In particular, it is shown that under conditional independence given the label and/or additional latent variables, representations learned by reconstruction-based self-supervised learning algorithms can achieve small errors in the downstream linear classification task (Arora et al., 2019; Tosh et al., 2021).…”
Section: Additional Related Work (citation type: mentioning)
confidence: 99%
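The "downstream linear classification task" referenced here is the standard linear probe protocol: freeze the learned representations and train only a linear classifier on top. A minimal sketch, assuming frozen embeddings Z and labels y are already computed (the random placeholders below are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder embeddings and labels; in practice Z comes from the frozen,
# pretrained self-supervised encoder applied to the labeled dataset.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)  # linear probe only
print("linear probe accuracy:", probe.score(Z_te, y_te))
```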