2018
DOI: 10.48550/arxiv.1811.08458
Preprint

Intermediate Level Adversarial Attack for Enhanced Transferability

Abstract: Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model. However, adversarial examples may be overfit to exploit the particular architecture and feature representation of a source model, resulting in sub-optimal black-box transfer attacks to other target models. This leads us to introduce the Intermediate Level Attack (ILA), which attempt…
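The abstract describes fine-tuning an existing adversarial example so that its effect on a chosen intermediate layer of the source model is amplified, with the aim of improving black-box transfer. The sketch below illustrates that idea only; the helper names (ila_finetune, feature_layer), layer choice, and hyperparameters are assumptions for illustration, not the authors' reference implementation.

```python
# Illustrative sketch of the intermediate-level fine-tuning idea described in the
# abstract: amplify the feature perturbation at a chosen layer of the source model
# along the direction already induced by an existing adversarial example.
# Names and hyperparameters are assumed, not taken from the paper.
import torch

def ila_finetune(model, feature_layer, x_clean, x_adv, eps=8/255, alpha=1/255, steps=10):
    """Refine x_adv (an existing adversarial example) for better transfer.

    Assumes `model` is in eval mode and `feature_layer` is an intermediate
    nn.Module of the source model whose output we want to perturb further.
    """
    feats = {}

    def hook(_module, _inp, out):
        feats["h"] = out

    handle = feature_layer.register_forward_hook(hook)

    with torch.no_grad():
        model(x_clean)
        h_clean = feats["h"].detach()
        model(x_adv)
        # Guide direction: how the existing adversarial example moved the features.
        guide = (feats["h"] - h_clean).detach()

    x = x_adv.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        model(x)
        delta = feats["h"] - h_clean
        # Maximize the projection of the current feature perturbation onto the guide.
        loss = (delta * guide).sum()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x = x + alpha * grad.sign()
            # Stay inside the epsilon ball around the clean input and in valid pixel range.
            x = torch.min(torch.max(x, x_clean - eps), x_clean + eps)
            x = torch.clamp(x, 0, 1)

    handle.remove()
    return x.detach()
```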

Cited by: 3 publications (2 citation statements)
References: 16 publications
“…Our work provides per-layer conditions for robustness, which may be used to select the best regularization parameter for each layer independently, given a data set and architecture, through cross-validation. Although we have not shown it empirically in the paper, our approach should theoretically provide a level of robustness against the intermediate level attacks recently introduced in [18]. Our work provides a theoretical backing for the empirical findings in [28,36], which state that Leaky ReLU, a modified version of the ReLU function, may be comparatively more robust.…”
Section: Related Work (mentioning)
Confidence: 64%
“…Among them, PGD [50] is regarded as one of the most powerful attacks [2]. Notably, adversarial examples have been found to be transferable [60,59] among different neural network classifiers, which has inspired a series of black-box attacks [71,79,83,45,14,28]. On the other hand, universal (i.e., image-agnostic) adversarial perturbations have also been discovered [53,41].…”
Section: Related Work (mentioning)
Confidence: 99%
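For context on the PGD attack referenced in the statement above [50]: it iterates signed-gradient ascent on the loss, projecting back onto an ℓ∞ ball around the input after each step. Below is a minimal illustrative sketch under that standard formulation; the function name and default hyperparameters are assumptions, not taken from the cited papers.

```python
# Minimal l-infinity PGD sketch for context on the attack referenced above [50].
# Function name and defaults are illustrative, not from the cited papers.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=20):
    # Random start inside the epsilon ball, clipped to valid pixel range.
    x_adv = torch.clamp(x + torch.empty_like(x).uniform_(-eps, eps), 0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the eps-ball
            x_adv = torch.clamp(x_adv, 0, 1)                       # keep valid pixel range
    return x_adv.detach()
```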