2022
DOI: 10.1609/aaai.v36i6.20595

Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks

Abstract: We propose a versatile framework based on random search, Sparse-RS, for score-based sparse targeted and untargeted attacks in the black-box setting. Sparse-RS does not rely on substitute models and achieves state-of-the-art success rate and query efficiency for multiple sparse attack models: L0-bounded perturbations, adversarial patches, and adversarial frames. The L0-version of untargeted Sparse-RS outperforms all black-box and even all white-box attacks for different models on MNIST, CIFAR-10, and ImageNet. …
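
To make the random-search idea in the abstract concrete, below is a minimal sketch of an untargeted, score-based L0 attack in the black-box setting. It is only an illustration of the general approach: the initialization, the pixel-resampling scheme, and the greedy acceptance rule are simplifications, and all names (sparse_rs_l0_sketch, score_fn, k) are assumptions, not the authors' implementation.

```python
import numpy as np

def margin_loss(scores, y):
    """Untargeted margin: correct-class score minus the best other score.
    A negative value means the image is already misclassified."""
    other = np.delete(scores, y)
    return scores[y] - other.max()

def sparse_rs_l0_sketch(score_fn, x, y, k=50, n_queries=1000, rng=None):
    """Illustrative random-search L0 attack (not the authors' exact algorithm).

    score_fn: black-box function returning class scores for an image (H, W, C)
    x:        clean image in [0, 1], shape (H, W, C)
    y:        true label (int)
    k:        sparsity budget (number of perturbed pixels)
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W, C = x.shape

    # Initialise: k random pixel locations, each set to a random corner of [0, 1]^C.
    idx = rng.choice(H * W, size=k, replace=False)
    vals = rng.integers(0, 2, size=(k, C)).astype(x.dtype)

    def apply(idx, vals):
        x_adv = x.copy().reshape(-1, C)
        x_adv[idx] = vals
        return x_adv.reshape(H, W, C)

    best = margin_loss(score_fn(apply(idx, vals)), y)
    for _ in range(n_queries):
        if best < 0:                      # already adversarial, stop early
            break
        # Candidate move: resample one perturbed pixel (location and colour).
        cand_idx, cand_vals = idx.copy(), vals.copy()
        j = rng.integers(k)
        cand_idx[j] = rng.integers(H * W)
        cand_vals[j] = rng.integers(0, 2, size=C)
        loss = margin_loss(score_fn(apply(cand_idx, cand_vals)), y)
        if loss < best:                   # greedy: keep the change only if it helps
            best, idx, vals = loss, cand_idx, cand_vals
    return apply(idx, vals), best < 0
```

Each iteration costs exactly one query to score_fn (plus one for the initial point), which is why the acceptance rule and sampling distribution are the main levers for query efficiency in this family of attacks.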

Cited by 50 publications (44 citation statements)
References 40 publications
“…We also train a set of 3 models with standard training (no attack). For the ℓ1 threat model, we train using adversarial examples generated via APGD (Croce & Hein, 2021). For the ℓ2 and ℓ∞ threat models, we use PGD adversarial training.…”
Section: C.2 Additional Evaluation Details (mentioning, confidence: 99%)
“…For all other threat models, we use the same attack generation method during training as used for evaluation, as described in the previous section. For all threat models (with the exception of the ℓ1 threat model, which uses settings from Croce & Hein (2021), and UAR attacks, which use default settings from Kang et al. (2019)), we use 20 iterations to find adversarial examples with step size ε/18. We train all models with a batch size of 256 for 100 epochs and evaluate the model saved at the epoch which achieves the highest robust accuracy on the test set.…”
Section: C.2 Additional Evaluation Details (mentioning, confidence: 99%)
“…Following the settings in DiffPure [24], we use a fixed subset of 512 randomly sampled images. We use a naturally pretrained WideResNet-28-10 [40] as an underlying classifier provided by Robustbench [7]. For diffusion models, we use pretrained DDPM++ [35].…”
Section: Experimental Settings (mentioning, confidence: 99%)
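
For context on the quoted evaluation setup, the sketch below loads the naturally pretrained WideResNet-28-10 ("Standard") from RobustBench and evaluates it on a 512-image CIFAR-10 subset, assuming RobustBench's load_model and load_cifar10 helpers. It covers only the classifier side; the DiffPure-style DDPM++ purification step is not reproduced, and the subset selection differs from the quoted paper (see the comments).

```python
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model

# Hypothetical reconstruction of the quoted setup. Note: load_cifar10 returns the
# first n_examples of the test set, whereas the quoted work samples a random fixed
# subset; the DDPM++ purification stage is omitted entirely.
x_test, y_test = load_cifar10(n_examples=512)
model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf").eval()

correct = 0
with torch.no_grad():
    for i in range(0, len(x_test), 128):          # small batches to keep memory modest
        xb, yb = x_test[i:i + 128], y_test[i:i + 128]
        correct += (model(xb).argmax(dim=1) == yb).sum().item()
print(f"clean accuracy on 512 images: {correct / len(x_test):.3f}")
```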
“…For instance, patch-based adversarial attacks, which usually extend into the physical world, do not limit the intensity of the perturbation but rather its spatial extent. Examples include adversarial-Yolo (Thys et al., 2019), DPatch, AdvCam (Duan et al., 2020), and Sparse-RS (Croce et al., 2022). To obtain adversarial examples that are more harmonious to human perception while keeping an acceptable attack success rate in the digital world, Xiao et al. (2018) proposed stAdv, which generates adversarial examples via a spatial transform that modifies each pixel's position in the whole image.…”
(mentioning, confidence: 99%)