2021
DOI: 10.1007/s00521-021-06330-x

Detect and defense against adversarial examples in deep learning using natural scene statistics and adaptive denoising

Abstract: Despite the enormous performance of deep neural networks (DNNs), recent studies have shown their vulnerability to adversarial examples (AEs), i.e., carefully perturbed inputs designed to fool the targeted DNN. Currently, the literature is rich with many effective attacks for crafting such AEs. Meanwhile, many defense strategies have been developed to mitigate this vulnerability. However, the latter have shown effectiveness only against specific attacks and do not generalize well to different attacks. In this paper…
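
As general background on the abstract's notion of carefully perturbed inputs (a minimal illustrative sketch, not the paper's natural-scene-statistics detector or adaptive-denoising defense, which this excerpt does not detail), an adversarial example can be crafted with the fast gradient sign method (FGSM); the model, inputs, labels, and epsilon below are hypothetical placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Illustrative FGSM step: x_adv = clip(x + eps * sign(grad_x CE(model(x), y))).
    # 'model', 'x' (images scaled to [0, 1]) and 'y' (integer class labels) are placeholders.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

Detection-and-denoising defenses of the kind summarized in the abstract aim to flag or remove such perturbations before the input reaches the classifier.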

Cited by 8 publications
References 36 publications