2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00443

Adversarial Attacks Beyond the Image Space

Abstract: Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. However, in this paper we pay special attention to the subset of adversarial examples that correspond to meaningful changes in 3D physical properties (like rotation and translation, illumination condition, etc.). These adversaries arguably pose a more seri…
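As context for the abstract, here is a minimal sketch of the image-space setting it contrasts with, where every pixel can be modified independently. This is a generic one-step FGSM attack, not the paper's method; the torchvision classifier, the inputs, and the step size are stand-ins.

```python
# Generic image-space attack (one FGSM step): each pixel is perturbed
# independently along the sign of the loss gradient.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()  # stand-in target network

def fgsm_image_space(image, label, epsilon=8 / 255):
    """Return an adversarial image whose pixels are changed independently."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()  # unconstrained per-pixel change
    return adv.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)   # placeholder input batch
y = torch.tensor([207])          # placeholder label
x_adv = fgsm_image_space(x, y)
```

The paper's point is that such per-pixel perturbations need not correspond to any physically realizable change; the citation statements below discuss attacks that instead move physical parameters such as pose, illumination, and surface normals.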

Cited by 116 publications (83 citation statements). References 26 publications.
“…However, in a follow-up work Lu et al. [19] showed that detectors like YOLO 9000 [149] and Faster-RCNN [150] are 'currently' not fooled by the attacks introduced by Evtimov et al. [75]. Zeng et al. [87] also argue that adversarial perturbations in the image space do not generalize well to the physical space. Fig. 7: Example of the road sign attack [75]; the success rate of fooling the LISA-CNN classifier [75] on all the shown images is 100%.…”
Section: Road Sign Attack (mentioning)
confidence: 99%
“…Our system can be used for mining adversarial examples of 3D scenes, since it provides the ability to backpropagate from image to scene parameters. A similar idea has been explored by Zeng et al [2017], but we use a more general renderer. We demonstrate this in Figure 10.…”
Section: 3D Adversarial Example (mentioning)
confidence: 99%
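The statement above describes mining adversarial examples by backpropagating from the rendered image to scene parameters. The sketch below shows that loop in generic form, assuming a differentiable renderer is available as a function render(**params); the renderer, the parameter names, and the step sizes are hypothetical, not the cited system's API.

```python
# Hedged sketch: projected gradient ascent on scene parameters (illumination,
# material, pose, ...) through an assumed differentiable renderer `render`.
import torch
import torch.nn.functional as F

def attack_scene_parameters(render, model, params, label,
                            steps=50, lr=1e-2, budget=0.05):
    """Untargeted attack that edits scene parameters instead of pixels."""
    originals = {k: v.clone() for k, v in params.items()}
    params = {k: v.clone().requires_grad_(True) for k, v in params.items()}
    opt = torch.optim.Adam(params.values(), lr=lr)
    for _ in range(steps):
        image = render(**params)                      # gradients flow from image to scene
        loss = -F.cross_entropy(model(image), label)  # minimize -CE, i.e. maximize the loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                         # keep each parameter near its original value
            for k in params:
                delta = (params[k] - originals[k]).clamp(-budget, budget)
                params[k].copy_(originals[k] + delta)
    return {k: v.detach() for k, v in params.items()}
```

Whether the result stays physically plausible depends on how the scene is parameterized; the clamp above is only a crude proxy for that constraint.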
“…Beyond perturbations of texture, Zeng et al. [60] perturbed physical parameters (normal, illumination, and material) for untargeted attacks against 3D shape classification and a VQA system. However, their differentiable renderer assumes that the camera parameters are known beforehand, and it perturbs 2D normal maps under a fixed projection.…”
Section: Related Work (mentioning)
confidence: 99%
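To make the fixed-projection setting concrete, here is a small self-contained sketch in which the "renderer" is just Lambertian shading of a screen-space normal map, and the attack perturbs the normals while re-normalizing them to unit length. The shading model, parameter shapes, and step size are illustrative assumptions, not the method of [60].

```python
# Illustrative sketch: attack a classifier by perturbing a 2D normal map under a
# fixed camera/projection, using simple Lambertian shading as the renderer.
import torch
import torch.nn.functional as F

def lambertian_shade(normals, light_dir, albedo):
    """normals: (3, H, W) per-pixel normals; light_dir: (3,); albedo: (3, H, W)."""
    n = F.normalize(normals, dim=0)                            # unit-length normals
    shading = (n * light_dir.view(3, 1, 1)).sum(0).clamp(min=0)
    return (albedo * shading).unsqueeze(0)                     # rendered image, (1, 3, H, W)

def attack_normal_map(model, normals, light_dir, albedo, label, steps=40, lr=0.05):
    """Untargeted attack on the normal map; camera and projection stay fixed."""
    normals = normals.clone().requires_grad_(True)
    for _ in range(steps):
        image = lambertian_shade(normals, light_dir, albedo)
        loss = F.cross_entropy(model(image), label)
        grad, = torch.autograd.grad(loss, normals)
        with torch.no_grad():
            normals += lr * grad.sign()                        # ascend the loss
            normals.copy_(F.normalize(normals, dim=0))         # project back to unit normals
    return normals.detach()
```

Perturbing illumination or material would follow the same pattern, with gradients taken with respect to light_dir or albedo instead of the normals.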
“…This means the manipulation space can be greatly reduced by the image parameterization. 2) Constraints in 3D: 3D constraints, such as physically plausible shape geometry and texture, are not directly reflected in 2D [60]. Human perception of an object is in 3D or 2.5D [34], so perturbing the shape or texture of a 3D object may directly affect how humans perceive it.…”
Section: Problem Definition and Challenges (mentioning)
confidence: 99%