ADef: an Iterative Algorithm to Construct Adversarial Deformations
Preprint, 2018
DOI: 10.48550/arxiv.1804.07729

Cited by 15 publications (31 citation statements)
References 0 publications

“…We also see other unique ways of rendering inputs adversarial for deep learning models. Alaifari et al. [197] deformed image planes to construct adversarial examples. The techniques in [198] and [126] aim at perceptibility reduction of adversarial perturbations by directly focusing on ℓ0-norm reduction of the perturbation vector.…”
Section: G. Miscellaneous Attacks
confidence: 99%
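The excerpt above captures ADef's core idea: instead of adding pixel-wise noise, the attack deforms the image plane, resampling each pixel at a displaced location. Below is a minimal sketch of applying a deformation vector field tau to an image, assuming bilinear resampling via SciPy; the shear field at the end is purely illustrative and is not the ADef update rule.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform(image, tau):
    """Warp image x by a deformation field tau: x_tau(p) = x(p + tau(p)).

    image: (H, W) array
    tau:   (2, H, W) array of per-pixel (row, col) displacements in pixels
    """
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([rows + tau[0], cols + tau[1]])
    # Bilinear interpolation (order=1) at the displaced coordinates.
    return map_coordinates(image, coords, order=1, mode='nearest')

# Illustrative deformation field: a gentle horizontal shear.
x = np.random.rand(28, 28)
tau = np.zeros((2, 28, 28))
tau[1] = 0.5 * np.linspace(-1.0, 1.0, 28)[:, None]
x_deformed = deform(x, tau)
```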
“…Therefore, we only include samples that can fool the model in less than a specific number of iterations. We use a threshold of 10 as the maximum number of iterations, and demonstrate results on classification, semantic segmentation and object detection. We use the first 10 generated examples for each starting image in the segmentation and detection tasks.…”
Section: Results
confidence: 99%
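The protocol quoted here keeps only samples that the attack fools within 10 iterations. A minimal sketch of such a capped attack loop in PyTorch; `model` and `attack_step` are hypothetical stand-ins for a classifier and a single attack update, not code from the cited papers.

```python
import torch

MAX_ITERS = 10  # the iteration threshold quoted in the excerpt above

def run_attack(model, x, label, attack_step):
    """Iterate an attack, keeping the sample only if the model is
    fooled within MAX_ITERS iterations; otherwise discard it."""
    x_adv = x.clone()
    for it in range(1, MAX_ITERS + 1):
        x_adv = attack_step(model, x_adv, label)
        if model(x_adv).argmax(dim=-1).item() != label:
            return x_adv, it      # fooled within the budget
    return None, MAX_ITERS        # never fooled: exclude this sample
```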
“…Another line of work creates unrestricted adversarial examples that are not bounded by a norm threshold. One way to achieve this is by applying subtle geometric transformations such as spatial transformations [37,1], translations and rotations [14] or pose changes [2] to the inputs. Other works consider recoloring [19,25,4], intermediate features [13,26,41] and inserting new objects or patches in the image [6].…”
Section: Adversarial Examples
confidence: 99%
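Of the unrestricted attacks listed above, the translation-and-rotation attack of [14] is the simplest to sketch: search a small grid of rigid transforms for one that flips the prediction. The grid ranges and `model` below are illustrative assumptions, using torchvision's functional transforms.

```python
import torchvision.transforms.functional as TF

def rot_trans_attack(model, x, label,
                     angles=range(-30, 31, 5), shifts=range(-3, 4, 3)):
    """Grid-search rotations (degrees) and translations (pixels);
    return the first transformed input the model misclassifies."""
    for angle in angles:
        for dx in shifts:
            for dy in shifts:
                x_t = TF.affine(x, angle=float(angle), translate=[dx, dy],
                                scale=1.0, shear=[0.0])
                if model(x_t).argmax(dim=-1).item() != label:
                    return x_t, (angle, dx, dy)
    return None, None  # no transform in the grid fooled the model
```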
“…The forenamed methods mostly conform to this by limiting the ℓp-norm of the difference between the clean data and its adversary to a small value. There are some others that maintain high perceptual similarity without the ℓp-norm limitation [17], [18], [19], [20], [21]. For example, Engstrom et al. [19] found that neural networks are vulnerable to simple image transformations, such as rotation and translation.…”
Section: A. Adversarial Examples
confidence: 99%
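The ℓp constraint described in this excerpt is typically enforced by projecting each attack iterate back onto an ε-ball around the clean input. A minimal sketch for p = ∞, where the projection reduces to a per-pixel clamp; the ε value and variable names are illustrative.

```python
import torch

def project_linf(x_adv, x, eps=8 / 255):
    """Project x_adv onto the l_inf ball of radius eps around x,
    enforcing ||x_adv - x||_inf <= eps, then clip to valid pixels."""
    delta = torch.clamp(x_adv - x, -eps, eps)
    return torch.clamp(x + delta, 0.0, 1.0)
```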