2021
DOI: 10.48550/arxiv.2101.11466
Preprint

Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting

Federico Nesti, Alessandro Biondi, Giorgio Buttazzo

Abstract: Over the last few years, convolutional neural networks (CNNs) have proved to reach super-human performance in visual recognition tasks. However, CNNs can easily be fooled by adversarial examples, i.e., maliciously crafted images that force the networks to predict an incorrect output while being extremely similar to those for which a correct output is predicted. Regular adversarial examples are not robust to input image transformations, which can then be used to detect whether an adversarial example is presente…
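The detection mechanism sketched in the abstract can be illustrated compactly. The following is a minimal sketch in Python/PyTorch of transformation-based detection with voting, assuming a classifier `model` that maps a (C, H, W) image tensor to class logits; the transformation set, the agreement threshold, and the helper names (`predict`, `is_adversarial`) are illustrative assumptions, not the authors' implementation, which additionally employs dedicated defense perturbations.

import torch
import torchvision.transforms.functional as TF

def predict(model, x):
    # Top-1 class for a single image tensor x of shape (C, H, W).
    with torch.no_grad():
        return model(x.unsqueeze(0)).argmax(dim=1).item()

def is_adversarial(model, x, min_agreement=0.5):
    # Adversarial inputs tend to change label under small transformations,
    # while clean inputs are stable: flag x when the transformed copies'
    # majority vote disagrees with the untransformed prediction.
    transformations = [
        lambda t: TF.rotate(t, angle=5.0),             # small rotation
        lambda t: TF.rotate(t, angle=-5.0),
        lambda t: TF.gaussian_blur(t, kernel_size=3),  # mild blur
        lambda t: TF.hflip(t),                         # horizontal flip
    ]
    base = predict(model, x)
    votes = [predict(model, f(x)) for f in transformations]
    agreement = votes.count(base) / len(votes)
    return agreement < min_agreement  # low stability -> likely adversarial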

Cited by 2 publications (2 citation statements)
References 29 publications

Citation statements:
“…This is because the optimization process is simpler when not considering the randomized transformations. However, it is important to note that these patches would not be transferable to the real world, and are not robust even to simple transformations [10,22].…”
Section: EOT-based Patches on Cityscapes
confidence: 99%
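The patches discussed in this statement are optimized with Expectation over Transformation (EOT), i.e., by averaging the attack loss over randomly sampled transformations, which is what lets a patch survive the simple transformations mentioned above. Below is a minimal sketch of one EOT optimization step, assuming a PyTorch classifier `model`, a patch pasted at a fixed corner, and rotation as the only sampled transformation; all names, ranges, and the pasting scheme are illustrative, not taken from the cited work.

import torch
import torchvision.transforms.functional as TF

def eot_patch_step(model, image, patch, target, optimizer, n_samples=8):
    # One optimization step: average the targeted-attack loss over
    # n_samples randomly rotated copies of the patched image, so the
    # patch stays effective under the sampled transformations.
    # `patch` is a leaf tensor with requires_grad=True, optimized e.g.
    # with torch.optim.Adam([patch]).
    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(n_samples):
        angle = float(torch.empty(1).uniform_(-15.0, 15.0))
        patched = image.clone()
        patched[:, :patch.shape[1], :patch.shape[2]] = patch  # paste patch
        transformed = TF.rotate(patched, angle=angle)  # random transformation
        loss = loss + loss_fn(model(transformed.unsqueeze(0)),
                              torch.tensor([target]))
    (loss / n_samples).backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep the patch a valid image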
“…Along with the surge of attack algorithms, there has been an increase in the development of defense algorithms such as Adversarial Training (AT) [17], input transformation [18,19], gradient obfuscation [20], and stochastic defense via randomization [21,22,23,24,25,26,27]. However, limitations of existing defense techniques have also been observed [28,29,30].…”
Section: Introduction
confidence: 99%
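Among the defense families listed in this statement, stochastic defense via randomization is simple to illustrate. The following is a minimal sketch in the spirit of the randomization defenses cited, applying a random resize and random padding to each query so that a fixed adversarial perturbation is geometrically misaligned at inference time; the size range and the padding scheme are illustrative assumptions.

import torch
import torch.nn.functional as F

def randomized_forward(model, x, out_size=224):
    # Randomly resize the (C, H, W) input, then randomly pad it back to
    # out_size x out_size, so each query sees slightly different geometry.
    new_size = int(torch.randint(200, out_size + 1, (1,)))
    resized = F.interpolate(x.unsqueeze(0), size=new_size, mode="bilinear",
                            align_corners=False)
    pad_total = out_size - new_size
    left = int(torch.randint(0, pad_total + 1, (1,)))
    top = int(torch.randint(0, pad_total + 1, (1,)))
    padded = F.pad(resized, (left, pad_total - left, top, pad_total - top))
    return model(padded)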