2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00957
Boosting Adversarial Attacks with Momentum

Abstract: Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrat…

Cited by 2,230 publications (1,960 citation statements)
References 12 publications
“…The development of AEVis was in collaboration with the machine learning team that won first place in the NIPS 2017 non-targeted adversarial attack and targeted adversarial attack competitions, which aimed at attacking CNNs [40], [41]. Despite the promising results they achieved, the experts found the research process inefficient and inconvenient, especially in terms of the explanation of the model outputs.…”
Section: The Design of AEVis
confidence: 99%
“…He would like to see whether AEVis could help him gain a better understanding of the misclassification of adversarial examples. The DEV dataset contains 1000 images of different classes, and for each image, we generated an adversarial image using the non-targeted attacking method developed by the winning team [40], [52].…”
Section: Case Study
confidence: 99%
“…In [10], it was discovered that adding a momentum term to the iterative attack process leads to a more stable optimization trajectory. It was determined that adversarial examples generated with the momentum-based iterative method are more suitable for white-box attacks than those generated without momentum.…”
Section: B. Related Work
confidence: 99%
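The momentum update this citation refers to is the core of the paper's MI-FGSM algorithm: accumulate a velocity vector of normalized gradients, then step in the sign direction of the accumulated velocity. A minimal NumPy sketch of that update rule — the step-size schedule, the stabilizing constant, and the toy `grad_fn` interface are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.3, steps=10, mu=1.0):
    """Sketch of the momentum iterative FGSM update (Dong et al., 2018).

    grad_fn(x_adv) returns the loss gradient w.r.t. the input.
    eps is the total L_inf perturbation budget; mu is the momentum decay.
    """
    alpha = eps / steps           # per-step size so the total budget is eps
    g = np.zeros_like(x)          # accumulated (momentum) gradient
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # normalize by the L1 norm before accumulating, as in the paper
        # (the 1e-12 constant is an assumption to avoid division by zero)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # project back into the eps-ball around the original input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

With a constant gradient direction the momentum accumulates without sign flips, so the attack marches straight to the boundary of the eps-ball, which is exactly the stability the quoted statement describes.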
“…Our core idea is to incorporate an ℓ2 norm-based adversarial attack into the training process, and leverage its perturbation magnitude as an estimation of the geometric margin. Current state-of-the-art attacks typically achieve ∌100% success rates on powerful DNNs [3], [19], [20], while the norm of the perturbation can be reasonably small and thus fairly close to the real margin values. Since the adversarial perturbation is also parameterized by the network parameters (including weights and biases), our AMM regularizer can be jointly learned with the original objective through back-propagation.…”
Section: Introduction
confidence: 99%
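The margin-estimation idea in this last citation has an exact closed form in the linear case, which makes the intuition concrete: for a linear classifier, the smallest ℓ2 perturbation that reaches the decision boundary has norm equal to the geometric margin |wᔀx + b| / ‖w‖. A small sketch of that linear special case (the function name and test values are illustrative assumptions):

```python
import numpy as np

def min_l2_perturbation(w, b, x):
    """Smallest L2 perturbation moving x onto the decision boundary of the
    linear classifier sign(w @ x + b).

    Its norm equals the geometric margin |w @ x + b| / ||w||, so the
    perturbation magnitude of an optimal l2 attack recovers the margin.
    """
    f = float(w @ x + b)
    # closed-form orthogonal projection of x onto the hyperplane w @ x + b = 0
    return -f * w / float(w @ w)
```

For deep networks no such closed form exists, which is why the quoted work substitutes the perturbation magnitude of a strong ℓ2 attack as a margin estimate.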