2020
DOI: 10.1007/978-3-030-58517-4_15

Yet Another Intermediate-Level Attack

Cited by 28 publications (38 citation statements)
References 23 publications
“…To search for perturbations with better transferability, the Intermediate Level Attack (ILA) [13] maximizes the scalar projection of the adversarial example onto a guided direction at a specific hidden layer. Motivated by ILA [13], [21] takes advantage of auxiliary examples produced by a baseline attack and yields adversarial examples with better transferability.…”
Section: Related Work
Mentioning, confidence: 99%
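The scalar-projection objective this statement describes can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the authors' reference implementation: the function name and the small stabilizing constant are assumptions, and the features at the chosen hidden layer are taken as already extracted.

```python
import torch

def ila_projection(feat_clean, feat_guide, feat_adv):
    """Scalar projection maximized by ILA [13] at one hidden layer.

    feat_clean -- layer features of the clean input x
    feat_guide -- layer features of the baseline adversarial example x'
    feat_adv   -- layer features of the example being optimized
    """
    direction = (feat_guide - feat_clean).flatten()  # guided direction
    shift = (feat_adv - feat_clean).flatten()        # current feature shift
    # |shift| * cos(angle): grows when the shift both aligns with and
    # extends beyond the baseline perturbation in feature space.
    return torch.dot(shift, direction) / (direction.norm() + 1e-12)
```

Maximizing this quantity pushes the optimized example further along the direction the baseline attack already found effective, which is what yields the improved transferability noted above.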
“…Therefore, we also do not devote our efforts to this stage in this paper. For the intermediate stage, several works [33,31,49,15,13,21,45,14,25] have explored perturbing intermediate features to improve the transferability of adversarial examples. [13] develops a new paradigm that specifically optimizes transferability by using the intermediate representation of a given adversarial example as directional guidance, which provides a reasonable proxy for generating transferable perturbations.…”
Section: Introduction
Mentioning, confidence: 99%
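Both statements presuppose access to a model's representation at a manually chosen intermediate layer. A common way to obtain it in PyTorch is a forward hook; the sketch below uses an illustrative ResNet-50 layer choice that is not taken from either paper.

```python
import torch
import torchvision.models as models

model = models.resnet50().eval()  # substitute model; load trained weights in practice

features = {}

def save_features(name):
    def hook(module, inputs, output):
        features[name] = output  # keep the graph so gradients can flow back
    return hook

# The layer is picked manually; transferability is sensitive to this choice.
model.layer2.register_forward_hook(save_features("layer2"))

def get_feat(x):
    _ = model(x)              # forward pass fills `features` via the hook
    return features["layer2"]
```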
“…For instance, Xie et al. [53] suggested applying random and differentiable transformations to the inputs when attacking pre-trained substitute models. Crafting more transferable adversarial examples by optimizing at intermediate layers of the substitute models has also been widely explored [61,22,19,27,14]. Unlike prior work that experimented with substitute models trained on the same data as the victim models, in this paper we assume no access to the real victim training set and attempt to obtain models from a small number of auxiliary examples, e.g., 20 images from 2 classes.…”
Section: Related Work
Mentioning, confidence: 99%
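The random, differentiable input transformation suggested by Xie et al. [53] is commonly realized as a random resize followed by random zero-padding back to the original resolution, applied with some probability before each gradient step. A rough sketch, with illustrative sizes and probability:

```python
import random
import torch.nn.functional as F

def diverse_input(x, out_size=224, low=200, p=0.5):
    """Random resize-and-pad transform in the spirit of Xie et al. [53].
    Differentiable, so gradients still reach x."""
    if random.random() > p:
        return x                                      # keep input unchanged
    size = random.randint(low, out_size)              # random target size
    x = F.interpolate(x, size=size, mode="bilinear",
                      align_corners=False)            # differentiable resize
    pad = out_size - size
    left, top = random.randint(0, pad), random.randint(0, pad)
    # F.pad order for the last two dims: (left, right, top, bottom)
    return F.pad(x, (left, pad - left, top, pad - top), value=0.0)
```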
“…These techniques are effective but leave limited room for improvement in adversarial attacks due to the small number of optimization steps. Huang et al. (2019) and Li, Guo, and Chen (2020) only perturbed features from an intermediate layer of DNNs, but the layer must be picked manually and carefully. There are also some augmentation-based methods (Xie et al. 2019; Lin et al. 2020; Liu et al. 2017; Zhong and Deng 2020).…”
Section: Introduction
Mentioning, confidence: 99%
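Taken together, these statements describe a short gradient-ascent loop over a feature-space objective under an L-infinity budget. The sketch below combines the hypothetical ila_projection and get_feat helpers from the earlier snippets; the step size, budget, and step count are illustrative defaults, not values from any of the cited papers.

```python
import torch

def ila_attack(x, x_baseline, get_feat, eps=8/255, alpha=1/255, steps=10):
    """Refine a baseline adversarial example by maximizing the ILA
    projection at a fixed hidden layer (hyperparameters are illustrative)."""
    feat_clean = get_feat(x).detach()
    feat_guide = get_feat(x_baseline).detach()
    x_adv = x_baseline.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = ila_projection(feat_clean, feat_guide, get_feat(x_adv))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the objective
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                # valid pixel range
    return x_adv.detach()
```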