2021
DOI: 10.48550/arxiv.2104.09425
Preprint
Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?

Cited by 5 publications (9 citation statements). References 0 publications.
“…In this section, we discuss how a teacher's performance or architecture influences the student's performance. We conduct experiments using three different robust teacher models: adversarially trained WRT-34 [39], ResNet-50 [10], and ResNet-18 [43] on the CIFAR-10 dataset. All RNAS-Cl models, while achieving similar clean accuracy, exceed their counterparts by more than 10% in PGD accuracy.…”
Section: Teacher's Influence on Student's Performance
confidence: 99%
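The PGD accuracy quoted above is accuracy under a projected-gradient-descent attack. As a minimal sketch of that evaluation, assuming a hypothetical linear (logistic-regression) classifier rather than the paper's adversarially trained networks:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD against a logistic-regression classifier
    p(y=1|x) = sigmoid(w.x + b). Illustrative sketch only; the cited
    work attacks deep networks, not a linear model."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        # Gradient of cross-entropy loss w.r.t. the input: (p - y) * w
        grad = (p - y)[:, None] * w[None, :]
        x_adv = x_adv + alpha * np.sign(grad)     # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
    return x_adv

def accuracy(x, y, w, b):
    """Fraction of points whose predicted class matches the label."""
    return float(np.mean(((x @ w + b) > 0).astype(int) == y))
```

"PGD accuracy" is then simply `accuracy(pgd_attack(x, y, w, b), y, w, b)`: the same metric as clean accuracy, computed on the perturbed inputs.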
“…However, since the performance of pseudo-labelling itself depends on the amount of labeled data, the performance of their technique drops considerably as labeled data is reduced. To alleviate this problem, Sehwag et al. [28] illustrated the benefit of using additional data generated by generative models to improve adversarial robustness.…”
Section: Related Work
confidence: 99%
“…Our work differs from these works in that we are concerned with distributional robustness guarantees for DG, i.e., a certification that lets us quantify generalization performance on an unseen distribution rather than certifying instance-wise performance (see Appendix C.3 for a comparison between point-wise and distributional robustness). Recently, certified robustness has gained attention in the context of certifying the performance of a classifier (in a distributional sense) under bounded distribution shifts [40, 70, 59].…”
Section: Domain Generalization and Domain Adaptation
confidence: 99%
“…To address this, we propose a distance normalization technique that uses the distance between the source $P^S$ and a unique reference distribution $P^S_{adv}$ as the unit length in the representation space. This distribution $P^S_{adv}$ consists of points $(z', y)$ generated similarly to the CW attack [59, 13]: for each $z$ from the source, $z'$ is the closest misclassified point ($h(z') \neq y$). Using this, we report all distances in this paper as the normalized distance $W_2(P^S, \cdot) / \rho_{adv}$, where $\rho_{adv} := W_2(P^S, P^S_{adv})$.…”
Section: Domain Generalization via Minimizing Wasserstein Distance
confidence: 99%
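The normalization unit described above can be sketched for a hypothetical linear hypothesis $h$, where the closest misclassified point has a closed form (the orthogonal projection across the decision hyperplane). Pairing each source point with its adversarial counterpart gives a simple coupling whose transport cost upper-bounds the true $W_2$; this sketch uses that paired estimate, not the paper's actual construction:

```python
import numpy as np

def closest_misclassified(z, w, b, overshoot=1e-6):
    """For a linear classifier sign(w.z + b), the nearest point with a
    flipped prediction is the orthogonal projection onto the hyperplane,
    nudged slightly past it (hypothetical linear stand-in for h)."""
    margin = (z @ w + b) / np.dot(w, w)
    return z - (1.0 + overshoot) * margin[:, None] * w[None, :]

def rho_adv(z, w, b):
    """Normalization unit rho_adv: W2 distance between the source points
    and their closest-misclassified counterparts, estimated with the
    identity (paired) coupling, i.e. an RMS point-wise distance."""
    z_adv = closest_misclassified(z, w, b)
    return float(np.sqrt(np.mean(np.sum((z - z_adv) ** 2, axis=1))))
```

Any other distance $W_2(P^S, \cdot)$ would then be divided by `rho_adv(z, w, b)` to express it in these normalized units.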