2023
DOI: 10.1016/j.neucom.2022.10.034

Robustness-via-synthesis: Robust training with generative adversarial perturbations

Cited by 7 publications (6 citation statements)
References 22 publications
“…Although AT is expected to contribute to the generalization of the model for adversarial samples, we commonly observe several phenomena, such as catastrophic overfitting (Wong et al., 2020), label leaking (Kurakin et al., 2017), and gradient masking (Athalye et al., 2018; Ilyas et al., 2019), that result in overfitting to certain perturbations and hamper training. One of the contributing factors to these challenges in AT is adversarial sample generation based on the gradient of the cross-entropy loss (Baytaş & Deb, 2023). Therefore, the adversarial direction obtained by increasing the cross-entropy loss is insufficient to generate stronger and more diverse attacks (Etmann et al., 2019).…”
Section: Methods
confidence: 99%
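
A minimal sketch of the cross-entropy-gradient attack this excerpt refers to, written in PyTorch as a single FGSM-style step: the input is perturbed along the sign of the cross-entropy gradient. The classifier model, the pixel range [0, 1], and the budget epsilon are illustrative assumptions, not details of the cited paper's generator-based method.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    # Single-step attack: move the input along the sign of the
    # cross-entropy gradient, then clamp back to the valid pixel range.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Because the perturbation direction comes solely from the current model's cross-entropy gradient, the generated attacks vary little from step to step, which is the lack of strength and diversity the excerpt points out.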
“…PGD can explore stronger attacks than a single-step adversarial attack such as FGSM. However, the robustness literature often notes that the trade-off between robustness and generalization inevitably grows when the PGD attack is used in AT (Baytaş & Deb, 2023). For this reason, various modifications and improvements have been proposed to address the lack of generalization of PGD adversarial training (Wong & Kolter, 2018).…”
Section: Literature Review
confidence: 99%
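
For comparison with the single-step case above, a minimal PGD sketch in the same style: iterated gradient-sign steps projected back into an l-infinity ball of radius epsilon around the clean input. The hyperparameters (epsilon, step size alpha, number of steps) are illustrative defaults, not values taken from the cited work.

import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    # Random start inside the epsilon-ball around the clean input.
    x_adv = x + torch.empty_like(x).uniform_(-epsilon, epsilon)
    x_adv = x_adv.clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then project back into the epsilon-ball
        # and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()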
“…By concatenating a processed image hash sequence H and a feature vector I into a new, combined feature vector, we improved the robustness of models against adversarial examples. To investigate the effect of the ratio R of the length of I to the length of H in the combined feature vector, we evaluated the performance of the model against the adversarial example methods FGSM [16], PGD [17], and one-pixel [22] with different perturbation budgets. The detailed classification accuracies of network C are listed in Table 1.…”
Section: Methods
confidence: 99%
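
A hedged sketch of the concatenation step this excerpt describes: a coarse average-hash bit sequence H computed from the image is appended to a CNN feature vector I, and the hash grid size controls the ratio R between the two lengths. The hash construction and the function names here are assumptions for illustration; the citing paper's exact hash processing is not reproduced.

import torch
import torch.nn.functional as F

def average_hash_bits(x, grid=8):
    # x: (N, C, H, W) images in [0, 1]; returns (N, grid*grid) binary hash bits.
    gray = x.mean(dim=1, keepdim=True)                 # collapse channels
    small = F.adaptive_avg_pool2d(gray, (grid, grid))  # coarse downsample
    return (small > small.mean(dim=(2, 3), keepdim=True)).float().flatten(1)

def concat_hash_and_features(features, x, grid=8):
    # features: (N, d) feature vector I; the hash sequence H is appended,
    # so R = d / (grid * grid) sets the relative length of the two parts.
    return torch.cat([features, average_hash_bits(x, grid)], dim=1)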
“…1. Data modification-based mechanisms [14,16] improve the robustness of neural networks against adversarial examples by using improved data for training or testing. Hence, these methods are costly and require a large number of normal and adversarial examples to defend against a set of specific attacks.…”
Section: Introduction
confidence: 99%