2019
DOI: 10.1016/j.cose.2019.04.014

POBA-GA: Perturbation optimized black-box adversarial attacks via genetic algorithm

Abstract: Most deep learning models are vulnerable to adversarial attacks. Various adversarial attacks are designed to evaluate the robustness of models and to develop defense models. Currently, each adversarial attack is designed against its own target model and evaluated with its own metrics, and most black-box adversarial attack algorithms cannot match the success rates of white-box attacks. In this paper, comprehensive evaluation metrics are proposed for different adversarial attack…
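For readers unfamiliar with the approach named in the title, the sketch below illustrates how a genetic algorithm can evolve adversarial perturbations using only black-box queries. It is a minimal, hypothetical illustration: the query_scores oracle, the fitness definition, and all hyperparameters are assumptions, and it does not reproduce the POBA-GA algorithm itself.

```python
# Hypothetical genetic-algorithm black-box attack loop
# (illustrative sketch only; not the POBA-GA algorithm from the paper).
import numpy as np

def ga_blackbox_attack(x, query_scores, true_label, pop_size=20,
                       generations=100, eps=0.05, mutate_std=0.01):
    """Evolve additive perturbations using only model output scores.

    x            : input image, float array in [0, 1]
    query_scores : assumed black-box oracle mapping an image to class scores
    true_label   : index of the correct class whose confidence we suppress
    """
    # Initialize a population of small random perturbations.
    pop = np.random.uniform(-eps, eps, size=(pop_size,) + x.shape)
    for _ in range(generations):
        advs = np.clip(x + pop, 0.0, 1.0)
        # Fitness: lower true-class confidence is better.
        fitness = np.array([-query_scores(a)[true_label] for a in advs])
        order = np.argsort(fitness)[::-1]          # best individuals first
        parents = pop[order[: pop_size // 2]]      # selection (top half)
        # Crossover: average random parent pairs, then mutate with noise.
        idx = np.random.randint(len(parents), size=(pop_size, 2))
        pop = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2.0
        pop += np.random.normal(0.0, mutate_std, size=pop.shape)
        pop = np.clip(pop, -eps, eps)              # keep perturbation small
        best = np.clip(x + parents[0], 0.0, 1.0)
        if np.argmax(query_scores(best)) != true_label:
            return best                            # attack succeeded
    return np.clip(x + parents[0], 0.0, 1.0)
```

The loop only ever consumes output scores, never gradients, which is what makes it a black-box method; the selection/crossover/mutation choices here are deliberately the simplest possible ones.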


Cited by 59 publications (21 citation statements)
References 58 publications
“…Even if DNNs are black-box (e.g., model architectures and weights are unknown and the loss gradient is not accessible), adversarial attacks on DNNs may be possible. Several methods for adversarial attacks on black-box DNNs, which estimate adversarial perturbations using only model outputs (e.g., confidence scores), have been proposed [35][36][37]. The development and operation of secure, privacy-preserving, and federated DNNs are required in medical imaging [6].…”
Section: Discussion
confidence: 99%
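To make "estimate adversarial perturbations using only model outputs" concrete, below is a minimal greedy sketch in the spirit of simple score-based black-box attacks (SimBA-style coordinate search). It is not drawn from references [35]–[37]; query_scores is an assumed black-box oracle and the step size is an illustrative choice.

```python
# Hypothetical score-only black-box attack: greedy per-pixel search
# that keeps any change lowering the true-class confidence.
import numpy as np

def random_search_attack(x, query_scores, true_label,
                         steps=1000, step_size=0.02):
    """Coordinate-wise greedy attack using only output scores.

    x : input image, float array in [0, 1]
    """
    adv = x.copy()
    best_conf = query_scores(adv)[true_label]
    coords = np.random.permutation(adv.size)       # random pixel order
    for i in coords[:steps]:
        for sign in (+1.0, -1.0):
            cand = adv.copy()
            cand.flat[i] = np.clip(cand.flat[i] + sign * step_size, 0.0, 1.0)
            conf = query_scores(cand)[true_label]
            if conf < best_conf:                   # keep changes that lower
                adv, best_conf = cand, conf        # the true-class score
                break
    return adv
```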
“…Nevertheless, our findings may also be useful for developing black-box attack methods that estimate adversarial perturbations using only model outputs (e.g., confidence scores). Several methods for black-box attacks have been proposed [25][26][27]. Although they are limited to input-dependent adversarial attacks, universal adversarial attacks may be possible under the black-box condition because CNNs are sensitive to the directions of the Fourier basis functions.…”
Section: Discussion
confidence: 99%
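As a rough illustration of the Fourier-basis remark above, the hypothetical sketch below builds a single-frequency Fourier basis image and measures how often one shared (universal) perturbation in that direction flips a label-only black-box model. The frequencies, scaling, and query_label oracle are all assumptions, not part of the cited work.

```python
# Hypothetical test of a single Fourier-basis direction as a shared
# (universal) perturbation against a label-only black-box model.
import numpy as np

def fourier_basis_image(h, w, u, v):
    """Real 2D Fourier basis function at frequency (u, v), unit norm."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    basis = np.cos(2.0 * np.pi * (u * ys / h + v * xs / w))
    return basis / np.linalg.norm(basis)

def universal_flip_rate(images, labels, query_label, u, v, eps=0.1):
    """Fraction of (N, H, W, C) images whose predicted label changes
    under one shared Fourier-direction perturbation."""
    delta = eps * fourier_basis_image(*images.shape[1:3], u, v)
    flipped = 0
    for img, lab in zip(images, labels):
        adv = np.clip(img + delta[..., None], 0.0, 1.0)  # broadcast to RGB
        if query_label(adv) != lab:
            flipped += 1
    return flipped / len(images)
```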