2020
DOI: 10.1109/access.2020.2978056
Adversarial Examples Identification in an End-to-End System With Image Transformation and Filters

Abstract: Deep learning has received great attention in recent years because of its impressive performance on many tasks. However, its widespread adoption has also become a major security risk for those systems, as recent research has pointed out vulnerabilities in deep learning models. One security issue related to deep learning models is adversarial examples: instances with very small, intentional feature perturbations that cause a machine learning model to make a wrong …
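To make the abstract's definition concrete, here is a minimal sketch (not the paper's method) of an FGSM-style perturbation on a hypothetical toy linear classifier: a small, intentional change to the input flips the model's prediction.

```python
# Toy illustration of an adversarial example (hypothetical model,
# not the system described in the paper).
import numpy as np

def predict(w, x):
    """Linear classifier: class 1 if w.x > 0, else class 0."""
    return int(w @ x > 0)

def fgsm_perturb(w, x, eps):
    """Fast-gradient-sign step: for a linear score w.x, the gradient
    with respect to x is w itself, so step against sign(w)."""
    return x - eps * np.sign(w)

w = np.array([1.0, -1.0])   # toy weights (assumed for illustration)
x = np.array([0.1, 0.0])    # clean input, predicted as class 1
x_adv = fgsm_perturb(w, x, eps=0.2)

print(predict(w, x))      # 1: clean input classified correctly
print(predict(w, x_adv))  # 0: a small perturbation flips the prediction
```

The perturbation here has magnitude 0.2 per feature, yet it is enough to cross the decision boundary; image-space attacks exploit the same effect with perturbations small enough to be imperceptible.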

Cited by 2 publications (1 citation statement)
References 29 publications (61 reference statements)
“…The verification ensemble then votes on all denoised images. Thang and Matsui [45] used image transformation and filter techniques to identify adversarial examples sensitive to geometry and frequency and to remove adversarial noise.…”
Section: Introduction
Confidence: 99%