Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467386

Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes

Abstract: We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples for deep learning models using only the output label (hard label) returned for a queried input. We propose a simple and efficient Bayesian Optimization (BO)-based approach for developing black-box adversarial attacks. Issues with BO's performance in high dimensions are avoided by searching for adversarial examples in a structured low-dimensional subspace. We demonstrate the efficacy of …
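The core idea in the abstract — searching for a perturbation in a structured low-dimensional subspace, upsampled to the input resolution, using only hard-label queries — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the `upsample` and `hard_label_attack` names and the toy model are assumptions, and plain random sampling stands in for the paper's Bayesian-optimization acquisition step.

```python
import numpy as np

def upsample(delta_low, out_shape):
    """Nearest-neighbour upsample a low-dimensional perturbation to the
    full input resolution (the structured low-dimensional subspace)."""
    reps = (out_shape[0] // delta_low.shape[0], out_shape[1] // delta_low.shape[1])
    return np.kron(delta_low, np.ones(reps))

def hard_label_attack(query_label, x, true_label, eps=0.1,
                      low_shape=(4, 4), n_queries=50, seed=0):
    """Search the low-dimensional subspace for an adversarial perturbation,
    observing only the hard label of each query. Random sampling is used here
    in place of the BO acquisition step, for brevity."""
    rng = np.random.default_rng(seed)
    for _ in range(n_queries):
        # Sample a candidate in the low-dimensional subspace.
        delta_low = rng.uniform(-1, 1, size=low_shape)
        # Map it to input space with an L-infinity bound of eps.
        delta = eps * np.sign(upsample(delta_low, x.shape))
        # One hard-label query: success if the predicted label changes.
        if query_label(np.clip(x + delta, 0, 1)) != true_label:
            return delta  # adversarial perturbation found
    return None  # query budget exhausted
```

Because each query returns only a label, the search needs no gradient or confidence-score access; the low-dimensional parameterization is what keeps the number of queries small.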

Cited by 23 publications (11 citation statements) | References 16 publications
“…Adversarial training is a technique that has been shown to enhance the robustness of deep neural networks (DNNs) against adversarial attacks, particularly in static domains such as image [2,22,25] and graph [18,29,41,42] classification. This is achieved by incorporating adversarial examples, generated through adversarial attacks, into the training process.…”
Section: Introduction
confidence: 99%
“…Therefore, decision-based attacks with low query efficiency may be inapplicable in the real world. Recently, Shukla et al. [22] presented a hard-label black-box attack in low query budget regimes through Bayesian optimization. Although the requisite number of queries is greatly reduced, the attack success rate remains unsatisfactory.…”
Section: Introduction
confidence: 99%
“…Deep neural networks (DNNs) have obtained extraordinary achievements in a broad spectrum of areas (Al-Saffar, Tao, and Talab 2017; Torfi et al. 2020; Batmaz et al. 2019), but many works reveal that DNNs are extremely vulnerable to adversarial attacks (Shukla et al. 2021; Mao et al. 2021). Specifically, adversarial attacks can easily fool DNNs by employing adversarial examples (AEs), generated by imposing slight, carefully crafted noises into natural examples.…”
Section: Introduction
confidence: 99%
“…There are two types of black-box attacks: query-based attacks (Shukla et al. 2021; Yan et al. 2020; Croce and Hein 2020) and transfer-based attacks (Dong et al. 2018, 2019). Typically, query-based attacks (Shukla et al. 2021; Croce and Hein 2020) need an avalanche of queries to the target model to approximately estimate the required information (the input gradient). However, this resource-intensive query budget is costly and inevitably alerts the model owner, significantly limiting their applicability.…”
Section: Introduction
confidence: 99%