2017
DOI: 10.48550/arxiv.1711.07183
Preprint

Adversarial Attacks Beyond the Image Space

Cited by 7 publications (11 citation statements)
References 26 publications
“…We summarize the difference between our approach and the previous non-image adversarial attacks in Table 1. Differentiable renderers have been used in computing adversarial examples (Athalye et al., 2017; Zeng et al., 2017) and in generalizing neural style transfer to a 3D context (Kato et al., 2018; Liu et al., 2018). These renderers, however, are expensive to evaluate, requiring orders of magnitude more computation and much larger memory footprints compared to our method.…”
Section: Related Work
confidence: 98%
“…This leads to unrealistic attack images that cannot model real-world scenarios (Goodfellow, 2018; Hendrycks & Dietterich, 2018; Gilmer et al., 2018). Zeng et al. (2017) generate adversarial examples by altering physical parameters using a rendering network trained to approximate the physics of realistic image formation. This data-driven approach leads to an image formation model biased towards the rendering style present in the training data.…”
Section: Related Work
confidence: 99%
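The excerpt above describes generating adversarial examples by altering physical scene parameters through a rendering network that approximates image formation. Below is a minimal PyTorch-style sketch of that general idea, assuming a differentiable surrogate renderer render_net (physical parameters -> image batch) and a victim classifier (image -> logits); the function, names, and step settings are illustrative, not the cited authors' code.

import torch
import torch.nn.functional as F

def attack_physical_params(render_net, classifier, params, target_label,
                           steps=100, lr=1e-2, eps=0.1):
    # Perturb physical scene parameters (e.g., lighting, material color)
    # so that the re-rendered image is classified as target_label.
    params_orig = params.detach()
    delta = torch.zeros_like(params_orig, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        image = render_net(params_orig + delta)   # re-render the perturbed scene
        logits = classifier(image)                # victim model's prediction
        loss = F.cross_entropy(logits, torch.tensor([target_label]))
        opt.zero_grad()
        loss.backward()                           # gradients flow through the surrogate renderer
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)               # keep the physical change small

    return (params_orig + delta).detach()

The trade-off the citing papers highlight sits in render_net: a learned surrogate is cheap to differentiate but biased toward the rendering style of its training data, while a physically accurate differentiable renderer is far more expensive in computation and memory.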
“…Other Threat Models. Some recent work has focused on spatial threat models, which allow for slight perturbations of the locations of features in an input rather than perturbations of the features themselves [23, 22, 3]. Others have proposed threat models based on properties of a 3D renderer [25], modification of an image's hue and saturation [6], and inverting images [7]. See appendix D for discussion of non-additive threat models and comparison to our proposed functional threat model.…”
Section: Review of Existing Threat Models
confidence: 99%
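The spatial threat models mentioned in the excerpt above perturb where features appear rather than their values. A hedged sketch of one simple instance follows, assuming a black-box classifier callable that maps an image array to a predicted label; the callable, the search bounds, and the grid size are assumptions for illustration, not taken from [23, 22, 3].

import numpy as np
from scipy.ndimage import rotate, shift

def spatial_attack(classifier, image, true_label,
                   max_shift=3, max_angle=5.0, steps=7):
    # Grid search over small translations and rotations until the
    # classifier's prediction changes; pixel values are left untouched.
    shifts = np.linspace(-max_shift, max_shift, steps)
    angles = np.linspace(-max_angle, max_angle, steps)
    for dx in shifts:
        for dy in shifts:
            for angle in angles:
                # Shift only the spatial axes; leave channels (if any) alone.
                offsets = (dy, dx) if image.ndim == 2 else (dy, dx, 0)
                candidate = shift(image, offsets, mode='nearest')
                candidate = rotate(candidate, angle, axes=(1, 0),
                                   reshape=False, mode='nearest')
                if classifier(candidate) != true_label:
                    return candidate   # feature locations moved, values intact
    return None                        # no successful spatial perturbation found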
“…Other Threat Models. A few papers have focused on threat models that are neither additive nor spatial. Zeng et al. [25] perturb the properties of a 3D renderer to render an image of an object which is unrecognizable to a classifier or other machine learning algorithm. Hosseini and Poovendran [6] propose "Semantic Adversarial Examples," which allow modifications of the input image's hue and saturation.…”
Section: Non-additive Threat Models
confidence: 99%
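As a rough illustration of the hue/saturation threat model attributed to Hosseini and Poovendran above, the sketch below randomly shifts hue and rescales saturation in HSV space until a hypothetical classifier changes its prediction. It is a sketch under those assumptions, not the authors' implementation; the classifier callable and trial budget are placeholders.

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def semantic_attack(classifier, image_rgb, true_label, n_trials=500, rng=None):
    # classifier: callable, RGB image in [0, 1] -> predicted label
    # image_rgb:  float array of shape (H, W, 3) with values in [0, 1]
    rng = rng or np.random.default_rng(0)
    hsv = rgb_to_hsv(image_rgb)

    for _ in range(n_trials):
        hue_shift = rng.uniform(0.0, 1.0)    # rotate hue (wraps around the color wheel)
        sat_scale = rng.uniform(0.0, 1.0)    # desaturate by a random factor
        candidate = hsv.copy()
        candidate[..., 0] = (candidate[..., 0] + hue_shift) % 1.0
        candidate[..., 1] = candidate[..., 1] * sat_scale
        adv = hsv_to_rgb(candidate)
        if classifier(adv) != true_label:    # shapes preserved, only colors changed
            return adv
    return None                              # no color shift fooled the classifier

The point of this threat model is that the perturbation is unbounded in pixel space yet leaves the scene's structure intact, which is why it falls outside the additive and spatial categories discussed above.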