2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.300

Universal Adversarial Perturbations Against Semantic Image Segmentation

Abstract: While deep learning is remarkably successful on perceptual tasks, it has also been shown to be vulnerable to adversarial perturbations of the input. These perturbations denote noise added to the input, generated specifically to fool the system while being quasi-imperceptible to humans. More severely, there even exist universal perturbations that are input-agnostic but fool the network on the majority of inputs. While recent work has focused on image classification, this work proposes attacks against semantic image segmentation.
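
The core procedure behind such a universal attack can be sketched compactly: accumulate gradients of a targeted per-pixel loss over many training images and project the shared perturbation back onto a small L-infinity ball. Below is a minimal, hedged PyTorch sketch; `model`, `loader`, `target`, and all hyperparameters are illustrative assumptions, not the authors' implementation (which uses a more elaborate loss).

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, target, eps=10/255, step=1/255, epochs=5):
    # One perturbation xi shared across ALL images, kept inside an L-inf ball
    # of radius eps so it stays quasi-imperceptible.
    # target: (H, W) long tensor, the fixed segmentation the attack should force.
    xi = None
    for _ in range(epochs):
        for x in loader:                           # x: (B, 3, H, W) image batch
            if xi is None:
                xi = torch.zeros(1, *x.shape[1:])  # broadcasts over the batch
            delta = xi.clone().requires_grad_(True)
            logits = model(x + delta)              # (B, C, H, W) per-pixel logits
            # Pull every pixel of every image toward the fixed target labels.
            loss = F.cross_entropy(logits, target.expand(x.shape[0], -1, -1))
            loss.backward()
            # Signed-gradient step on the shared noise, then clip to the ball.
            xi = (xi - step * delta.grad.sign()).clamp_(-eps, eps)
    return xi
```

Because the same `xi` is optimized against every batch, whatever survives the projection is by construction input-agnostic, which is what makes the perturbation universal.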

Cited by 248 publications (153 citation statements)
References 27 publications

“…There are also studies on semantic segmentation and object detection models in computer vision (Xie et al., 2017b; Metzen et al., 2017b). In both semantic segmentation and object detection tasks, the goal is to learn a model that associates an input image x with a series of labels Y = {y_1, y_2, ..., y_N}.…”
Section: Object Detection and Semantic Segmentation (mentioning)
confidence: 99%
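
Concretely, for semantic segmentation that label series contains one class per pixel, so N = H·W. A tiny illustrative snippet (the shapes and the stand-in model output are assumptions, not taken from the cited papers):

```python
import torch

# Stand-in for what a segmentation network emits: per-pixel class scores.
B, C, H, W = 1, 19, 512, 1024       # e.g. 19 Cityscapes classes
logits = torch.randn(B, C, H, W)

# The "series of labels" Y = {y_1, ..., y_N}: one class id per pixel.
Y = logits.argmax(dim=1)            # shape (B, H, W)
print(Y.numel())                    # N = 512 * 1024 = 524288 labels per image
```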
“…The work (Xie et al., 2017b) can generate an adversarial perturbation on x that causes the classifier to give wrong predictions on all the output labels of the model, in order to fool either semantic segmentation or object detection models. The work (Metzen et al., 2017b) finds that there exists a universal perturbation for any input image for semantic segmentation models.…”
Section: Object Detection and Semantic Segmentation (mentioning)
confidence: 99%
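
The per-image attack attributed to Xie et al. can be sketched as projected gradient ascent on the per-pixel loss, pushing every output label away from the clean prediction. This is a hedged sketch; the loss choice, step sizes, and function name are assumptions rather than the cited formulation:

```python
import torch
import torch.nn.functional as F

def dense_attack(model, x, eps=8/255, step=2/255, iters=10):
    # Per-image perturbation that tries to flip every pixel's predicted label.
    y_clean = model(x).argmax(dim=1)            # (B, H, W) clean predictions
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y_clean)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascend: away from clean labels
            delta.clamp_(-eps, eps)             # keep the noise imperceptible
        delta.grad.zero_()
    return (x + delta).detach()
```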
“…outline the "attack generator". • We validate our conceptual ideas by investigating the semantic segmentation adversarial attacks introduced in [27]. Furthermore, we present first small experiments, where we deduce new attacks by exchanging various measures of the original attack formulations.…”
Section: Introduction (mentioning)
confidence: 73%
“…For example, the perturbation always forces the victim model to output one fixed image of an empty street without any pedestrians or cars in sight. • Dynamic Target: This type of goal has also been introduced by Metzen et al. [27] in the context of attacking semantic image segmentation. Here, the adversarial perturbation aims at keeping the ML module's output unchanged with the exception of removing certain target classes.…”
Section: Adversary's Goals (mentioning)
confidence: 99%
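
A dynamic target of this kind can be made concrete by starting from the model's own clean prediction and overwriting the pixels of the class to be hidden with the labels of their nearest remaining pixels, so the scene stays plausible while the class disappears. The nearest-neighbor fill below is a sketch of that idea, not necessarily the scheme of [27]; the function name is hypothetical:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dynamic_target(pred, hide_class):
    """Target segmentation that removes one class from a clean prediction.

    pred:       (H, W) integer array of per-pixel class ids
    hide_class: class id to erase (e.g. the 'person' label)
    """
    mask = pred == hide_class
    # For each hidden pixel, locate the nearest pixel of any other class...
    _, (iy, ix) = distance_transform_edt(mask, return_indices=True)
    # ...and copy its label over, so the hidden class vanishes "in place".
    return pred[iy, ix]
```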
“…ignoring pedestrians on street). Metzen et al. [11] proposed to generate adversarial examples so that the segmentation model incorrectly segments one cityscape as another one.…”
Section: Related Work (mentioning)
confidence: 99%