2021
DOI: 10.48550/arxiv.2111.01124
Preprint

When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?

Abstract: Contrastive learning (CL) can learn generalizable feature representations and achieve state-of-the-art performance on downstream tasks by finetuning a linear classifier on top of them. However, as adversarial robustness becomes vital in image classification, it remains unclear whether CL is able to preserve robustness in downstream tasks. The main challenge is that in the 'self-supervised pretraining + supervised finetuning' paradigm, adversarial robustness is easily forgotten due to a learning task mismatch …
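To make the 'self-supervised pretraining + supervised finetuning' paradigm the abstract refers to concrete, here is a minimal hypothetical sketch of the finetuning half: a frozen encoder (standing in for features learned by contrastive pretraining; here just a fixed random projection, an assumption for illustration) with a linear classifier trained on top. This is not the paper's method, only an illustration of linear-probe finetuning on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "pretrained" encoder: a fixed random projection
# stands in for features learned during contrastive pretraining.
W_enc = rng.normal(size=(16, 8))

def encode(x):
    # Encoder weights stay frozen during finetuning.
    return np.tanh(x @ W_enc)

# Toy labeled downstream data: two Gaussian blobs in input space.
x0 = rng.normal(loc=-1.0, size=(50, 16))
x1 = rng.normal(loc=+1.0, size=(50, 16))
X = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

Z = encode(X)  # features are fixed; only the linear head is trained

# Linear classifier (logistic regression) trained by gradient descent.
w = np.zeros(Z.shape[1])
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # sigmoid probabilities
    grad_w = Z.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

acc = np.mean(((Z @ w + b) > 0) == (y == 1))
```

The paper's question is about what this setup forgets: standard-trained linear heads on top of fixed features say nothing about robustness under adversarial perturbations of `X`, even when the pretraining itself was adversarially robust.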

Cited by 1 publication
References 38 publications (97 reference statements)