2018
DOI: 10.1007/978-3-030-01249-6_14
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation

Abstract: Deep Neural Networks (DNNs) have been widely applied in various recognition tasks. However, recently DNNs have been shown to be vulnerable against adversarial examples, which can mislead DNNs to make arbitrary incorrect predictions. While adversarial examples are well studied in classification tasks, other learning problems may have different properties. For instance, semantic segmentation requires additional components such as dilated convolutions and multiscale processing. In this paper, we aim to characteri…
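As a toy illustration of the adversarial examples the abstract refers to (this is the standard fast gradient sign method, not the paper's own technique, and the linear classifier below is an illustrative stand-in for a DNN):

```python
import numpy as np

def fgsm(x, w, b, y_true, eps):
    """One-step fast gradient sign attack on a logistic classifier
    p = sigmoid(w . x + b), bounded by eps in the L-infinity norm."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability of class 1
    grad = (p - y_true) * w                 # d(logistic loss)/dx
    return x + eps * np.sign(grad)          # move x to increase the loss

# A point correctly classified as class 1, close to the decision boundary.
w, b = np.array([1.0, 1.0]), 0.0
x = np.array([0.1, 0.1])                    # score w.x + b = 0.2 > 0 -> class 1
x_adv = fgsm(x, w, b, y_true=1.0, eps=0.2)  # score becomes -0.2 < 0 -> class 0
```

A perturbation of at most 0.2 per coordinate flips the prediction, which is the core phenomenon the paper studies in the segmentation setting.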

Cited by 78 publications (58 citation statements)
References 46 publications
“…Then, a simple non-differentiable detector, thus less prone to attacks, is sufficient to identify the attack. As shown by our experiments, our approach outperforms the state-of-the-art one of [43] for standard attacks, such as those introduced in [44,6].…”
Section: Introduction
confidence: 66%
“…Nevertheless, in [44,6], classification attack schemes were extended to semantic segmentation networks. However, as far as defense schemes are concerned, only [43] has proposed an attack detection method in this scenario. This was achieved by analyzing the spatial consistency of the predictions of overlapping image patches.…”
Section: Adversarial Attacks in Semantic Segmentation
confidence: 99%
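The citation above describes the paper's detection idea: a benign image yields consistent predictions on overlapping patches, while an adversarial one tends not to. A minimal sketch of such a consistency check, assuming a per-patch segmentation function `seg_fn` (function name, patch size, and sampling scheme are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def spatial_consistency(seg_fn, image, patch=64, seed=0):
    """Draw two overlapping patches, segment each independently, and
    return the fraction of pixels whose labels agree in the overlap."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # First patch at a uniformly random position.
    y1 = int(rng.integers(0, h - patch + 1))
    x1 = int(rng.integers(0, w - patch + 1))
    # Second patch shifted by at most half a patch, so they always overlap.
    y2 = int(np.clip(y1 + rng.integers(-patch // 2, patch // 2 + 1), 0, h - patch))
    x2 = int(np.clip(x1 + rng.integers(-patch // 2, patch // 2 + 1), 0, w - patch))
    p1 = seg_fn(image[y1:y1 + patch, x1:x1 + patch])
    p2 = seg_fn(image[y2:y2 + patch, x2:x2 + patch])
    # Overlap region in whole-image coordinates.
    oy0, oy1 = max(y1, y2), min(y1, y2) + patch
    ox0, ox1 = max(x1, x2), min(x1, x2) + patch
    a = p1[oy0 - y1:oy1 - y1, ox0 - x1:ox1 - x1]
    b = p2[oy0 - y2:oy1 - y2, ox0 - x2:ox1 - x2]
    return float((a == b).mean())  # 1.0 = perfectly consistent labels
```

In the detection setting, a low consistency score (averaged over several patch pairs) would flag the input as likely adversarial; any deterministic per-pixel segmenter scores 1.0 by construction.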
“…In this paper, we summarize applications in four fields. In the computer vision field, there are adversarial attacks in image classification [15, 17, 24-26], semantic image segmentation, and object detection [27,28]. In the natural language processing field, there are adversarial attacks in machine translation [29] and text generation [30].…”
Section: Introduction
confidence: 99%