2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00445

Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses

Abstract: Research on adversarial examples in computer vision tasks has shown that small, often imperceptible changes to an image can induce misclassification, which has security implications for a wide range of image processing systems. Considering L2 norm distortions, the Carlini and Wagner attack is presently the most effective white-box attack in the literature. However, this method is slow, since it performs a line search over one of the optimization terms and often requires thousands of iterations. In this paper, a…
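
The truncated abstract introduces the paper's Decoupled Direction and Norm (DDN) attack: at each iteration a gradient step fixes the perturbation direction, and the perturbation's L2 norm is then adjusted multiplicatively, rather than folding the norm into the objective through a line-searched penalty term as C&W does. Below is a minimal sketch of that idea, assuming a PyTorch classifier model over NCHW image batches in [0, 1]; the function name, the cosine step-size schedule, and the projection details are illustrative assumptions, not the authors' reference implementation.

import math
import torch
import torch.nn.functional as F

def ddn_attack(model, x, y, steps=100, init_epsilon=1.0, gamma=0.05):
    # Untargeted L2 attack sketch: each iteration takes a normalized gradient
    # step (the direction), then rescales the perturbation to a radius epsilon
    # that shrinks by (1 - gamma) when x + delta fools the model and grows by
    # (1 + gamma) when it does not.
    delta = torch.zeros_like(x, requires_grad=True)
    epsilon = x.new_full((x.size(0),), init_epsilon)
    best_delta = torch.zeros_like(x)
    best_norm = x.new_full((x.size(0),), float("inf"))

    for k in range(steps):
        # cosine-decayed step size (illustrative schedule, not the paper's exact one)
        alpha = 0.01 + 0.5 * (1.0 - 0.01) * (1 + math.cos(math.pi * k / steps))

        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)

        with torch.no_grad():
            # direction: move along the normalized loss gradient
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta += alpha * grad / g_norm.view(-1, 1, 1, 1)

            # norm: multiplicative update of the target radius
            is_adv = model(x + delta).argmax(1) != y
            epsilon = torch.where(is_adv, (1 - gamma) * epsilon, (1 + gamma) * epsilon)

            # project onto the epsilon-sphere, then keep x + delta in [0, 1]
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta.mul_((epsilon / d_norm).view(-1, 1, 1, 1))
            delta.copy_((x + delta).clamp(0, 1) - x)

            # keep the smallest-norm adversarial perturbation found so far
            norms = delta.flatten(1).norm(dim=1)
            better = is_adv & (norms < best_norm)
            best_norm = torch.where(better, norms, best_norm)
            best_delta[better] = delta.detach()[better]

    # returns the original image wherever no adversarial example was found
    return x + best_delta

Because the norm is updated outside the loss, no per-sample search over a penalty weight is needed, which is where the speed-up over the C&W attack claimed in the abstract comes from.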

Cited by 206 publications (207 citation statements). References 13 publications.
“…While adversarial formulations using Lagrangian relaxation, like the C&W attack [7] and the EAD attack [8], are known to be more effective than PGD for finding minimal perturbations under L2 and L1 constraints, these formulations involve computationally expensive line searches, often requiring thousands of iterations to converge, making them difficult to scale to 5000 ImageNet samples for a large number of models. Furthermore, because some of the models we tested operate with stochasticity during inference, we found that other state-of-the-art attacks formulated to efficiently find minimal perturbation distances [9,10] were generally difficult to tune, as their success becomes stochastic as they near the decision boundary. Thus we proceeded with the PGD formulation, as we found it to be the most reliable, computationally tractable, and conceptually simple to follow.…”
Section: B. Image Perturbations, B1. White-Box Adversarial Attacks
confidence: 99%
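
The excerpt above contrasts minimal-perturbation formulations (C&W, EAD, and efficient follow-ups, plausibly including the DDN paper cited here) with the fixed-budget PGD formulation its authors ultimately chose. For reference, a minimal L2 PGD sketch under the same PyTorch assumptions as the DDN sketch above; eps, alpha, and steps are illustrative values, not parameters taken from the cited works.

import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps=1.0, alpha=0.2, steps=40):
    # Fixed-budget L2 PGD: ascend the loss along the normalized gradient,
    # then project the perturbation back onto the L2 ball of radius eps.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta += alpha * grad / g_norm.view(-1, 1, 1, 1)
            # scale down only if the perturbation left the eps-ball
            norms = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            delta.mul_((eps / norms).clamp(max=1.0).view(-1, 1, 1, 1))
            # keep the perturbed image in the valid pixel range
            delta.copy_((x + delta).clamp(0, 1) - x)
    return (x + delta).detach()

Unlike the minimal-perturbation attacks, this returns whatever perturbation the fixed budget eps allows, which is what makes it cheap and predictable enough for the excerpt's large-scale evaluation.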
“…However, scratching beneath the surface reveals a different picture. These CNNs are easily fooled by imperceptibly small perturbations explicitly crafted to induce mistakes, usually referred to as adversarial attacks [6,7,8,9,10]. Further, they exhibit a surprising failure to recognize objects in images corrupted with different noise patterns that humans have no trouble with [11,12,13].…”
Section: Introduction
confidence: 99%
“…Decoupling Direction and Norm l2-attack (DDN) [72] … DNNs, yet safety verification of large DNNs remains challenging.…”
Section: Other Attacks
confidence: 99%
“…• Decoupled Direction and Norm l2-attack (DDN) [72] as in [140] with the following parameters: 1000 iterations, initial epsilon 1.0, and gamma 0.05.…”
Section: Attack Parameters
confidence: 99%
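
These hyper-parameters (1000 iterations, initial epsilon 1.0, gamma 0.05) map directly onto the ddn_attack sketch given after the abstract. A hypothetical usage example follows; the torchvision model, the stand-in [0, 1] batch, and attacking the model's own predictions are illustrative assumptions, and [140] denotes the citing paper's evaluation framework, which is not reproduced here.

import torch
from torchvision.models import resnet18

# Hypothetical setup; ddn_attack is the sketch shown after the abstract.
model = resnet18(weights="IMAGENET1K_V1").eval()  # weights string per recent torchvision
images = torch.rand(4, 3, 224, 224)  # stand-in batch of [0, 1] images
labels = model(images).argmax(1)     # attack the model's current predictions

adv = ddn_attack(model, images, labels, steps=1000, init_epsilon=1.0, gamma=0.05)
print((adv - images).flatten(1).norm(dim=1))  # achieved L2 distortions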