2022
DOI: 10.1109/access.2022.3208131
Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification

Abstract: The popularity of adapting deep neural networks (DNNs) to solve hard problems has increased substantially. Specifically, in the field of computer vision, DNNs are becoming a core element in developing many image and video classification and recognition applications. However, DNNs are vulnerable to adversarial attacks, in which, given a well-trained image classification model, a malicious input can be crafted by adding mere perturbations to misclassify the image. This phenomenon raises many security concerns in…
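The abstract's notion of "adding mere perturbations to misclassify the image" can be illustrated with the fast gradient sign method (FGSM), one of the attacks typically covered in surveys like this one. The sketch below is purely illustrative: it uses a hypothetical logistic-regression "classifier" with hand-picked weights in place of a trained DNN, and a large epsilon so the flip is visible on a 3-dimensional input.

```python
import numpy as np

# Toy binary classifier standing in for a trained model (weights are
# arbitrary assumptions, not from the surveyed paper).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability of class 1 under the toy model."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Craft x_adv = x + eps * sign(d loss / d x) for binary cross-entropy.

    For a logistic model, the gradient of the BCE loss with respect to
    the input is (p - y) * w, so no autodiff framework is needed here.
    """
    p = predict(x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([2.0, 0.5, 1.0])   # "clean" input, classified as class 1
y = 1.0
x_adv = fgsm(x, y, eps=1.5)

print(predict(x) > 0.5)         # True  (clean input: class 1)
print(predict(x_adv) > 0.5)     # False (perturbed input: misclassified)
```

Against a real DNN the gradient would come from backpropagation and epsilon would be small enough that the perturbed image looks unchanged to a human; the mechanics of the perturbation step are the same.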


Cited by 30 publications (12 citation statements)
References 107 publications
“…We cannot completely remove a trained model's vulnerabilities by applying a state-of-the-art method of model development. For example, despite many studies on adversarial examples [50][51][52][53][54][55], no machine learning algorithm is known to produce models that behave correctly for all adversarial examples (Section 6.6).…”
Section: Security Controls at the System Level in the Presence of Vul…
confidence: 99%
“…Survey literature. For technical details of attacks and defenses, see previous papers, e.g., [50][51][52][53][54][55].…”
Section: Vulnerabilities and Security Controls
confidence: 99%
“…In recent years, the vulnerability of deep neural networks (DNNs) to adversarial attacks has sparked significant interest, leading to a growing body of research focused on interpreting adversarial attacks (Han et al, 2023) and devising defense and detection mechanisms (Khamaiseh et al, 2022). Various proposed methods include augmenting input images to enhance robustness against adversarial attacks (Frosio and Kautz, 2023), mapping adversarial images back to the clean distribution (Li et al, 2023), and using vector quantization (Dong and Mao, 2023).…”
Section: Related Work, Adversarial Defense
confidence: 99%
“…These bugs, often termed “model bugs” in the context of CNNs, can manifest in various forms, including structural bugs and training bugs [5]. Not surprisingly, the implications of such bugs can be severe, ranging from financial losses to amplified safety concerns [6], [7].…”
Section: Introduction
confidence: 99%