2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00026

Strengthening the Transferability of Adversarial Examples Using Advanced Looking Ahead and Self-CutMix

Cited by 13 publications (4 citation statements) · References 15 publications

“…For the training dataset, this paper randomly selects 1,000 normal images as target samples, one from each of the 1,000 classes of the ImageNet dataset [72], and generates 15,000 adversarial examples from 3 models (inception_v3 [73], inception_v4 [74], and resnet152 [14]) using 5 adversarial attack algorithms: DIM [75], FMFT-10 [76], LI-FGSM [77], SIM [78], and TIM [79]. In DIM, random transformations are applied to the input images at each iteration, creating diverse input patterns that improve the transferability of adversarial examples.…”
Section: Methods (mentioning)
confidence: 99%
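The DIM transformation quoted above is typically a random resize-and-pad applied with some probability before each gradient step. A minimal PyTorch-style sketch of one common variant (the function name input_diversity, the resize range, and p=0.5 are illustrative assumptions, not the exact configuration of the citing paper):

```python
import torch
import torch.nn.functional as F

def input_diversity(x, out_size=299, resize_low=270, p=0.5):
    """DIM-style transformation: with probability p, randomly shrink the
    batch and zero-pad it back to out_size x out_size at a random offset."""
    if torch.rand(1).item() > p:
        return x  # keep the untransformed input with probability 1 - p
    rnd = torch.randint(resize_low, out_size, (1,)).item()
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")
    pad_total = out_size - rnd
    pad_left = torch.randint(0, pad_total + 1, (1,)).item()
    pad_top = torch.randint(0, pad_total + 1, (1,)).item()
    # F.pad on a 4-D tensor pads (left, right, top, bottom)
    return F.pad(resized, (pad_left, pad_total - pad_left,
                           pad_top, pad_total - pad_top), value=0.0)
```

Because the transformation is re-sampled at every iteration, the attack never overfits to a single fixed view of the input, which is what improves transferability.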
“…The Nesterov iterative method [16] accelerates the crafting of adversarial examples by incorporating accelerated gradients into the attack algorithm. The lookahead iterative method [17] tunes the update direction by recording the gradients of multiple previous steps, helping the adversarial example escape suboptimal regions during its update. These gradient-based attacks refine the gradient to produce more accurate adversarial perturbations, and they are highly effective in both white-box and black-box attacks.…”
Section: Related Work (mentioning)
confidence: 99%
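For concreteness, the Nesterov accelerated update of [16] first "looks ahead" along the accumulated momentum before computing the gradient, while the lookahead idea of [17] likewise exploits gradients from earlier steps. A minimal sketch of one Nesterov-style iteration, assuming a PyTorch classifier model, step size alpha, momentum decay mu, and an L-infinity budget eps (all names are illustrative):

```python
import torch
import torch.nn.functional as F

def nesterov_step(model, x_adv, x_clean, y, g, alpha, mu, eps):
    """One Nesterov-accelerated attack iteration: look ahead along the
    momentum g, accumulate the normalized gradient, take a signed step."""
    x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_nes), y)
    grad = torch.autograd.grad(loss, x_nes)[0]
    # momentum accumulation with a per-sample L1-normalized gradient
    g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
    x_adv = x_adv + alpha * g.sign()
    # project back into the eps-ball around the clean image, then to [0, 1]
    x_adv = torch.clamp(x_adv, x_clean - eps, x_clean + eps).clamp(0.0, 1.0)
    return x_adv.detach(), g
```

The look-ahead point x_nes is what distinguishes this from plain momentum: the gradient is evaluated where the momentum is about to carry the iterate, which helps the update anticipate and escape suboptimal regions.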
“…Most adversarial examples are inherently unstable. Previous studies [23], [65], [28] have shown that adversarial examples face a trade-off between transferability and imperceptibility, i.e., imperceptible adversarial examples generated from a surrogate model can hardly fool the target model. To show that adversarial texts also transfer poorly, we first train two models on the same training dataset, generate adversarial texts from one model, and then transfer them to the other.…”
Section: A. Design Intuition (mentioning)
confidence: 99%
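The transfer experiment described above boils down to crafting adversarial inputs on one model and measuring how often a second, independently trained model misclassifies them. A minimal sketch, assuming both models are PyTorch classifiers over the same label set and that x_adv was produced by any attack on the surrogate (the helper name transfer_success_rate is hypothetical):

```python
import torch

@torch.no_grad()
def transfer_success_rate(target_model, x_adv, y):
    """Fraction of adversarial examples, crafted on a surrogate model,
    that the target model misclassifies (i.e., that transfer)."""
    target_model.eval()
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# usage sketch:
#   x_adv = some_attack(surrogate_model, x, y)  # hypothetical attack call
#   rate = transfer_success_rate(target_model, x_adv, y)
```

A low rate here is exactly the instability the citing paper points to: examples that fool the surrogate barely move the target model's predictions.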