2021 · Preprint
DOI: 10.48550/arxiv.2110.08256
Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness

Abstract: The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness. However, the progress is usually hampered by insufficient robustness evaluations. As the de facto standard to evaluate adversarial robustness, adversarial attacks typically solve an optimization problem of crafting adversarial examples with an iterative process. In this work, we propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack…
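The iterative optimization the abstract refers to can be illustrated with a minimal PGD-style (projected gradient descent) attack sketch. The linear logistic model, parameter names, and values below are illustrative assumptions for self-containment, not the paper's actual setup, which targets deep networks:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """L-inf PGD sketch against a toy logistic classifier sigmoid(w @ x + b).

    Illustrative only: real robustness evaluations run such loops on deep
    networks via automatic differentiation; here the gradient of the
    logistic loss w.r.t. the input is analytic: (sigmoid(z) - y) * w.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))          # model confidence for class 1
        grad = (p - y) * w                     # d(loss)/d(input)
        x_adv = x_adv + alpha * np.sign(grad)  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv
```

Running the loop shrinks the classifier's margin on a correctly classified point while keeping the perturbation inside the eps-ball, which is exactly the constrained optimization an evaluation attack solves.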

Cited by 3 publications (2 citation statements)
References 26 publications
“…They apply AutoAttack on tens of previous defenses and provide a comprehensive leader board. [955] propose MAMA based on training meta optimizers, which is computationally more efficient than AutoAttack with comparable attacking effectiveness. [956] propose the black-box RayS attack, and establish a similar leader board for defenses.…”
Section: Benchmarks
confidence: 99%
“…Despite its † Corresponding authors. blooming development [5,27,34], recent research in adversarial machine learning has revealed that face recognition models based on deep neural networks are highly vulnerable to adversarial examples [11,42], leading to serious consequences or security problems in real-world applications.…”
Section: Introduction
confidence: 99%