2022
DOI: 10.1088/1742-6596/2203/1/012026
Improving the Transferability of Adversarial Examples by Using Generative Adversarial Networks and Data Enhancement

Abstract: Deep neural networks (DNNs) can be attacked by adversarial examples that are undetectable by humans. Generation-based approaches have recently gained popularity because they directly translate the input distribution into the distribution of adversarial instances, making them more effective and efficient. However, existing techniques are susceptible to overfitting on the substitute model, which limits the transferability of adversarial examples. In this paper, we introduce data augmentation into AdvGAN…

Cited by 1 publication
(1 citation statement)
References 5 publications
“…The problem with this method is that the added perturbation cannot be constrained properly and may result in invalid adversarial examples [13]. The GAN-based methods [15,16] generate adversarial examples using generative adversarial networks. These methods train (i) a feed-forward generator network that produces perturbations to create diverse adversarial examples and (ii) a discriminator network to ensure that the generated examples are realistic; once the generator network is trained, it can generate perturbations efficiently for any instance, which can potentially accelerate adversarial training as a defense.…”
Section: Introduction (mentioning)
confidence: 99%
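The training scheme described in the statement above can be sketched as follows. This is a minimal illustrative sketch of an AdvGAN-style setup, not the authors' exact model: the tiny networks, the epsilon bound, and the loss weighting are all assumptions made for the example. A generator G maps an input to a bounded perturbation, a discriminator D scores the realism of the perturbed input, and a frozen substitute classifier f supplies the adversarial loss.

```python
# AdvGAN-style sketch (illustrative; architectures and hyperparameters assumed).
import torch
import torch.nn as nn

eps = 0.1                                         # L_inf bound on the perturbation
G = nn.Sequential(nn.Linear(8, 8), nn.Tanh())     # perturbation generator
D = nn.Sequential(nn.Linear(8, 1))                # realism discriminator
f = nn.Sequential(nn.Linear(8, 2))                # frozen substitute classifier
for p in f.parameters():
    p.requires_grad_(False)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(16, 8)           # a batch of clean inputs
y = torch.randint(0, 2, (16,))   # their true labels

# Discriminator step: distinguish clean inputs from perturbed ones.
x_adv = (x + eps * G(x)).detach()
loss_d = bce(D(x), torch.ones(16, 1)) + bce(D(x_adv), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: look realistic to D while making f misclassify.
x_adv = x + eps * G(x)
loss_gan = bce(D(x_adv), torch.ones(16, 1))             # realism term
loss_adv = -nn.functional.cross_entropy(f(x_adv), y)    # adversarial term
loss_g = loss_gan + loss_adv
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Once trained, one forward pass of G yields a perturbation for any input;
# the tanh output keeps each component of the perturbation within [-eps, eps].
delta = eps * G(x)
assert float(delta.abs().max()) <= eps
```

This illustrates the efficiency argument in the quoted passage: after training, generating an adversarial example is a single feed-forward pass rather than an iterative optimization per input.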