2022
DOI: 10.48550/arxiv.2201.09650
Preprint

What You See is Not What the Network Infers: Detecting Adversarial Examples Based on Semantic Contradiction

Yijun Yang, Ruiyuan Gao, Yu Li, et al.

Abstract: Adversarial examples (AEs) pose severe threats to the applications of deep neural networks (DNNs) to safety-critical domains, e.g., autonomous driving. While there has been a vast body of AE defense solutions, to the best of our knowledge, they all suffer from some weaknesses, e.g., defending against only a subset of AEs or causing a relatively high accuracy loss for legitimate inputs. Moreover, most existing solutions cannot defend against adaptive attacks, wherein attackers are knowledgeable about the defens…

Cited by 1 publication (1 citation statement)
References 38 publications
“…• EMShepherd protects the DNN model user's data privacy as it is agnostic to the model's inputs, which instead are always required by prior reconstruction-based detection methods [42,62]. The sensitive inputs should not be shared with third-party detectors.…”
Section: Advantages (mentioning)
Confidence: 99%