2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00078

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

Cited by 160 publications (137 citation statements). References 10 publications.
“…Naseer et al. [342] proposed self-supervised adversarial training, whereas [343] independently analyzes adversarial training for self-supervision by incorporating it into pre-training. Similarly, [344] and [345] use perturbations in the image space as well as in the latent space of StyleGAN to make training more effective.…”
Section: A. Model Alteration for Defense
confidence: 99%
“…Semi- and weak supervision could also be introduced into RGB-D salient object detection, by leveraging image-level tags [185] and pseudo pixel-wise annotations [188,190], to improve detection performance. Furthermore, several studies [191,192] have suggested that models pre-trained using self-supervision can be used effectively to achieve better performance. Therefore, we could train saliency prediction models on large amounts of annotated RGB images in a self-supervised manner and then transfer the pre-trained models to the RGB-D salient object detection task.…”
Section: Different Supervision Strategies
confidence: 99%
“…The labeling and sample-efficiency challenges of deep learning are, in fact, further exacerbated by its vulnerability to adversarial attacks. The sample complexity of learning an adversarially robust model with current methods is significantly higher than that of standard learning [16]. Additionally, adversarial training (AT)-based techniques have been observed to cause an undesirable decline in standard accuracy (the classification accuracy on unperturbed inputs) while increasing robust accuracy (the classification accuracy on worst-case perturbed inputs) [16][17][18].…”
Section: Introduction
confidence: 99%
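The trade-off quoted above, standard accuracy on clean inputs versus robust accuracy under worst-case perturbations, can be made concrete with a toy model. The following is an illustrative NumPy sketch of a projected-gradient (PGD) attack on a fixed linear classifier; the data, weights, and hyperparameters are invented for illustration and are not from the cited works:

```python
import numpy as np

def pgd_attack(w, b, X, y, eps=0.5, alpha=0.1, steps=10):
    """Multi-step L-infinity attack (PGD) on a logistic classifier.

    Ascends the logistic loss, projecting back into an eps-ball around
    the clean inputs after every step.
    """
    X_adv = X.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))   # sigmoid probabilities
        grad_x = (p - y)[:, None] * w[None, :]       # d(loss)/d(input)
        X_adv = X_adv + alpha * np.sign(grad_x)      # gradient-ascent step
        X_adv = np.clip(X_adv, X - eps, X + eps)     # project into the ball
    return X_adv

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0).astype(int) == y))

# Toy data: two well-separated Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
w, b = np.array([1.0, 1.0]), 0.0  # a fixed separating hyperplane

std_acc = accuracy(w, b, X, y)                       # accuracy on clean inputs
rob_acc = accuracy(w, b, pgd_attack(w, b, X, y), y)  # accuracy under attack
print(f"standard={std_acc:.2f} robust={rob_acc:.2f}")
```

Because PGD only ever increases each sample's loss, robust accuracy here can never exceed standard accuracy, which is the gap the quoted passage describes.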
“…Jiang et al. [18] improved robustness by learning representations that were consistent under both augmented data and adversarial examples. Chen et al. [16] generalized adversarial training to different self-supervised pre-training and fine-tuning schemes.…”
Section: Introduction
confidence: 99%
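The pre-training/fine-tuning view attributed to Chen et al. [16] above can be caricatured in a few lines: start from weights obtained elsewhere and continue training on adversarially perturbed inputs (inner maximization of the loss, outer minimization over the weights). Below is a minimal NumPy sketch with a logistic model; the "pretrained" weights, single-step FGSM attack, and all hyperparameters are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fgsm(w, b, X, y, eps):
    """One-step L-infinity attack (FGSM) on a logistic classifier."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

def adversarial_finetune(w, b, X, y, eps=0.3, lr=0.1, epochs=50):
    """Fine-tune (w, b) on adversarial examples instead of clean inputs."""
    for _ in range(epochs):
        X_adv = fgsm(w, b, X, y, eps)                # inner maximization
        p = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
        w = w - lr * (X_adv.T @ (p - y)) / len(y)    # outer minimization
        b = b - lr * float(np.mean(p - y))
    return w, b

# Toy data: two Gaussian blobs in 2-D.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w0, b0 = np.array([0.2, -0.1]), 0.0   # stand-in for "pretrained" weights
w1, b1 = adversarial_finetune(w0, b0, X, y)

def robust_acc(w, b):
    """Accuracy on FGSM-perturbed inputs."""
    Xa = fgsm(w, b, X, y, 0.3)
    return float(np.mean(((Xa @ w + b) > 0).astype(int) == y))

print(f"robust acc before={robust_acc(w0, b0):.2f} after={robust_acc(w1, b1):.2f}")
```

Fine-tuning against the attack improves robust accuracy relative to the initial weights, which is the qualitative effect the quoted passage attributes to combining adversarial training with pre-training and fine-tuning.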