2017 IEEE Workshop on Information Forensics and Security (WIFS)
DOI: 10.1109/wifs.2017.8267651

The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality

Abstract: Recent research has shown that machine learning systems, including state-of-the-art deep neural networks, are vulnerable to adversarial attacks. By adding to the input object an imperceptible amount of adversarial noise, it is highly likely that the classifier can be tricked into assigning the modified object to any desired class. It has also been observed that these adversarial samples generalize well across models. A complete understanding of the nature of adversarial samples has not yet emerged. To…
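The abstract's central claim, that an imperceptible additive perturbation can change a classifier's decision, is commonly demonstrated with the fast gradient sign method (FGSM). The sketch below is a minimal, illustrative Python/PyTorch example on a toy untrained model; the network, input, and epsilon budget are placeholders and are not taken from the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for any differentiable model.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

x = torch.randn(1, 20, requires_grad=True)   # clean input
y_clean = model(x).argmax(dim=1)             # label assigned to the clean input

# One gradient step on the loss of the current prediction, taken w.r.t. the input.
loss = nn.functional.cross_entropy(model(x), y_clean)
loss.backward()

epsilon = 0.25                               # L-infinity perturbation budget
x_adv = x + epsilon * x.grad.sign()          # FGSM perturbation

y_adv = model(x_adv).argmax(dim=1)
print("clean label:", y_clean.item(), "| label after perturbation:", y_adv.item())

With a small epsilon the perturbed input stays close to the original, yet its label frequently changes; targeted variants instead descend the loss of a chosen target class.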

Cited by 42 publications (49 citation statements)
References 25 publications
“…Defenses: Proposed defenses include detection and rejection methods [32,26,55,61,3,63], pre-processing, quantization and dimensionality reduction methods [12,73,7], manifold-projection methods [40,72,82,86], methods based on stochasticity/regularization or adapted architectures [109,7,68,88,35,43,76,45,51,107], ensemble methods [57,94,34,100], as well as adversarial training [109,65,36,83,90,54,62]; however, many defenses have been broken, often by considering "specialized" or novel attacks [13,15,5,6]. In [6], only adversarial training, e.g., the work by Madry et al [62], has been shown to be effective -although many recent defenses have not been studied extensively.…”
Section: Related Work
confidence: 99%
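The snippet above singles out adversarial training, e.g., Madry et al. [62], as the defense that has held up best. The following is a minimal sketch of that min-max idea, assuming PyTorch and a generic model, optimizer, and (x, y) batch; the helper names and hyperparameters (eps, alpha, steps) are illustrative, not the referenced implementation.

import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=7):
    # Inner maximization: gradient-ascent steps on the loss, projected back
    # into an L-infinity ball of radius eps around the clean input.
    x0 = x.detach()
    x_adv = x0 + torch.empty_like(x0).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x0 - eps), x0 + eps)
    return x_adv

def adversarial_training_step(model, optimizer, x, y):
    # Outer minimization: update the model on worst-case perturbed inputs
    # instead of the clean ones.
    model.eval()
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()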
“…In a preliminary version of this paper, in the context of Euclidean spaces, we provided a theoretical explanation of the adversarial effect of perturbation for the closer + query scenario [28], in terms of the Local Intrinsic Dimensionality (LID) [29]-[31]. The LID characterizes the order of magnitude of the growth of probability measure with respect to a neighborhood of increasing radius.…”
Section: B. Contributions
confidence: 99%
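For context on the quantity the snippet above refers to: LID is typically estimated locally from nearest-neighbor distances, for example with the maximum-likelihood estimator used in this line of work, LID(x) ≈ -((1/k) * sum_i ln(r_i / r_k))^{-1}, where r_1 ≤ … ≤ r_k are the distances from x to its k nearest neighbors. The sketch below, assuming only NumPy and synthetic data, illustrates that estimator; it is not the authors' code.

import numpy as np

def lid_mle(query, data, k=20):
    # Maximum-likelihood LID estimate at `query` from its k nearest neighbours in `data`.
    dists = np.sort(np.linalg.norm(data - query, axis=1))[:k]
    dists = dists[dists > 0]              # guard against duplicates of the query point
    return -1.0 / np.mean(np.log(dists / dists[-1]))

# Example: points lying on a 5-dimensional subspace embedded in 50 ambient dimensions.
# The local estimate should track the intrinsic dimension (about 5), not the ambient 50.
rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 5)) @ rng.normal(size=(5, 50))
print(lid_mle(data[0], data[1:], k=50))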
“…First, phenomenological models are highly environment-specific, and any changes of the original field environment may result in a significantly different outcome. Thus, they can show a considerable degree of stiffness in the prediction error, when not optimally trained against adversarial perturbations [23]. This means that in some cases the predictive errors may not have the smoothness the concept of applicability requires.…”
Section: Phenomenological Models
confidence: 99%