2022
DOI: 10.1007/978-3-031-19772-7_35
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity

Abstract: Transferability is the property of adversarial examples to be misclassified by models other than the surrogate model for which they were crafted. Previous research has shown that transferability increases substantially when training of the surrogate model is stopped early. A common hypothesis to explain this is that models learn the non-robust features that adversarial attacks exploit during the later training epochs. Hence, an early-stopped model is more robust (and hence a better surrogate) than …
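The core LGV idea of attacking an ensemble of surrogates drawn from a vicinity in weight space can be illustrated with a minimal sketch. This is a toy stand-in, not the paper's implementation: it uses a linear score function and Gaussian weight sampling (LGV itself collects checkpoints along SGD with a high learning rate), and all names (`lgv_style_attack`, `sigma`, `n_surrogates`) are illustrative assumptions.

```python
import numpy as np

def lgv_style_attack(x, base_w, eps=0.1, n_surrogates=10, sigma=0.5, seed=0):
    """One FGSM-style step whose gradient is averaged over surrogate
    weight vectors sampled from a Gaussian vicinity of base_w
    (a toy proxy for LGV's collected checkpoints)."""
    rng = np.random.default_rng(seed)
    # Sample surrogate weight vectors in a vicinity of the base model.
    surrogates = base_w + sigma * rng.standard_normal((n_surrogates, base_w.size))
    # For a linear score f(x) = w . x, the input gradient is w itself,
    # so the ensemble gradient is just the mean of the sampled weights.
    avg_grad = surrogates.mean(axis=0)
    # Perturb the input along the sign of the averaged gradient.
    return x + eps * np.sign(avg_grad)

# Usage: attack a zero input against a toy linear surrogate.
x = np.zeros(4)
w = np.array([1.0, -2.0, 0.5, 0.0])
x_adv = lgv_style_attack(x, w)
```

Averaging gradients over many nearby surrogates smooths out model-specific noise, which is the intuition behind the transferability gain reported for LGV.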

Cited by 14 publications (15 citation statements) | References 21 publications
“…Attacking multiple surrogates from a sufficiently large geometry vicinity (LGV) improves transferability [22].…”
Section: Conflicting (mentioning, confidence: 99%)
“…(1) Attacking an ensemble of surrogates in the distribution found by Bayes improves the transferability [21,31]. (2) SAM can be seen as a relaxation of Bayes [41].…”
Section: Dependent (mentioning, confidence: 99%)
“…However, they often overfit the surrogate models and thus exhibit poor transferability. Recently, many works have been proposed to generate more transferable adversarial examples [5,6,13,14,17,19,20,24,39,40,41,43,48], which we briefly summarize as below.…”
Section: Related Work (mentioning, confidence: 99%)