2022 · Preprint
DOI: 10.48550/arxiv.2204.02683
Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations

Abstract: Contrastive learning is a highly effective method that uses unlabeled data to produce representations that are linearly separable for downstream classification tasks. Recent works have shown that contrastive representations are not only useful when data come from a single domain, but are also effective for transferring across domains. Concretely, when contrastive representations are trained on data from two domains (a source and a target) and a linear classification head is trained to predict labels using only…
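
The abstract describes a concrete evaluation setup: a contrastive encoder is pretrained on unlabeled data from both domains, and a linear head fit only on labeled source representations is then applied to the target. Below is a minimal sketch of that linear-transferability measurement, not the paper's actual method; `encode`, `W`, and `linear_transfer_accuracy` are hypothetical stand-ins, and scikit-learn's LogisticRegression plays the role of the linear classification head.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_transfer_accuracy(encode, X_src, y_src, X_tgt, y_tgt):
    """Fit a linear head on source representations; report target accuracy."""
    Z_src = encode(X_src)  # representations from the frozen, pretrained encoder
    Z_tgt = encode(X_tgt)
    head = LogisticRegression(max_iter=1000).fit(Z_src, y_src)
    return head.score(Z_tgt, y_tgt)  # accuracy of the source-trained head on target

# Toy usage: a fixed random feature map stands in for a contrastive encoder,
# and the target domain is a small perturbation of the source (a "related
# subpopulation"). All of this is illustrative, not taken from the paper.
rng = np.random.default_rng(0)
W = rng.normal(size=(20, 8))

def encode(X):
    return np.tanh(X @ W)

X_src = rng.normal(size=(200, 20))
y_src = rng.integers(0, 2, size=200)
X_tgt = X_src + 0.1 * rng.normal(size=X_src.shape)  # shifted but related domain
print(linear_transfer_accuracy(encode, X_src, y_src, X_tgt, y_src))
```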

Cited by 1 publication (3 citation statements)
References 11 publications
“…Theoretical analyses for final data representation. For the analysis of the data representation learnability, many works focus on the final data representation (the optimal solution of the pretext task) and measure the quality of the final data representation in the downstream tasks by using a linear classifier (HaoChen et al. 2021, 2022; Arora et al. 2019; Lee et al. 2021; Tosh, Krishnamurthy, and Hsu 2021). The main difference in this line of work is how to obtain the final data representation.…”
Section: Related Work
Confidence: 99%
“…Despite the empirical success of SSL (He et al. 2020; Chen et al. 2020; Chen and He 2021; Zhong et al. 2022), there are only a few works that focus on data representation learnability (Arora et al. 2019; Tosh, Krishnamurthy, and Hsu 2021; Lee et al. 2021; HaoChen et al. 2021, 2022; Tian 2022a,b; Wen and Li 2021; Liu et al. 2021). However, studying the learnability is helpful in understanding why SSL models can obtain meaningful data representations.…”
Section: Introduction
Confidence: 99%