2022
DOI: 10.48550/arxiv.2210.05968
Preprint

Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation

Abstract: Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples, which can produce erroneous predictions by injecting imperceptible perturbations. In this work, we study the transferability of adversarial examples, which is significant due to its threat to real-world applications where model architecture or parameters are usually unknown. Many existing works reveal that the adversarial examples are likely to overfit the surrogate model that they are generated from, limiting its transfer at…
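As a rough illustration of the black-box transfer setting the abstract describes, the sketch below (PyTorch; the one-step fgsm helper and the model/data arguments are assumptions for illustration, not the paper's RAP method) crafts adversarial examples against a white-box surrogate and measures how often they also fool an unseen target model.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=16 / 255):
    """One-step L-inf attack on a white-box surrogate (illustration only)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def transfer_success_rate(surrogate, target, x, y, eps=16 / 255):
    """Craft adversarial examples on the surrogate, then check whether they
    also change the predictions of an unseen (black-box) target model."""
    x_adv = fgsm(surrogate, x, y, eps)
    with torch.no_grad():
        return (target(x_adv).argmax(dim=1) != y).float().mean().item()
```

Transfer-based methods such as RAP aim to keep the crafted examples from overfitting the surrogate, i.e., to raise this transfer success rate against unknown target models.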

Cited by 2 publications (4 citation statements)
References 28 publications
“…To further illustrate the effectiveness and efficiency of GSA, we also compare GSA with state-of-the-art methods from other categories. We take a feature disruption method TAIG-R [14] and an advanced gradient-based method RAP [25] as additional baselines.…”
Section: Setup (mentioning)
confidence: 99%
“…We also extend GSA to targeted attacks. Following the previous work [25], we set the ℓ∞ magnitude of perturbation ε = 16/255, the number of iterations T = 10, and the step size α = 2/255. Experimental results with different ε are shown in Appendix B.2.…”
Section: Setup (mentioning)
confidence: 99%
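For orientation, the hyperparameters quoted above (perturbation budget ε = 16/255, T = 10 iterations, step size α = 2/255) match a standard iterative ℓ∞ attack loop. The sketch below is a generic I-FGSM-style baseline under those settings, offered only as a hedged reference point; it implements neither RAP [25] nor GSA.

```python
import torch
import torch.nn.functional as F

def iterative_linf_attack(model, x, y, eps=16 / 255, alpha=2 / 255, steps=10):
    """Generic iterative L-inf attack using the quoted budget, step size,
    and iteration count (untargeted; maximizes the cross-entropy loss)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step along the gradient sign, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1).detach()
    return x_adv
```

The targeted variant mentioned in the quote would instead descend the loss toward a chosen target label rather than ascending it away from the true label.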