2022
DOI: 10.1002/int.22850

A data‐driven adversarial examples recognition framework via adversarial feature genomes

Abstract: Adversarial examples pose many security threats to convolutional neural networks (CNNs). Most defense algorithms prevent these threats by finding differences between the original images and adversarial examples. However, the found differences do not contain features about the classes, so these defense algorithms can only detect adversarial examples without recovering the correct labels. In this regard, we propose the Adversarial Feature Genome (AFG), a novel type of data that contain both the differences and f…
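The abstract takes as given that adversarial examples are original images with small, deliberately crafted perturbations added. As a minimal sketch of how such a perturbation is commonly generated, the snippet below implements one-step FGSM in PyTorch; it is only a generic illustration of the attack the abstract defends against, not the paper's AFG framework, and the `model`, `x`, `y`, and `epsilon` names are placeholders introduced here.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb x so the classifier's loss on the true labels increases.

    model   -- any differentiable image classifier (placeholder)
    x, y    -- batch of images in [0, 1] and their integer class labels
    epsilon -- L-infinity budget; small values keep the change imperceptible
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the sign of the gradient, then clamp back to the valid pixel range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```

A detection-only defense would merely flag such an `x_adv` as suspicious; the abstract's point is that this alone does not recover the image's correct label.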

Cited by 4 publications (2 citation statements) · References 33 publications
“In the past decade, a series of studies have shown that DNNs are vulnerable to adversarial examples (AEs) crafted by imposing designed perturbations on original images [5–10]. These perturbations are imperceptible to human beings but can easily fool DNNs, which raises invisible threats to vision-based automatic decision-making [11–15]. Consequently, the robustness of DNNs encounters great challenges in real-world applications [16,17]. …”
Section: Introduction (mentioning)
confidence: 99%