2021
DOI: 10.1016/j.patrec.2021.04.004
Adversarial attacks through architectures and spectra in face recognition

Cited by 15 publications (3 citation statements)
References 7 publications
“…Szegedy et al. [59] showed that deep neural network (DNN)-based models are vulnerable to adversarial attacks, and [58] discussed the problems and limitations of adversarial attacks when deep learning models are used to detect deepfakes. Defenses against adversarial attacks [60,61] and attack-defense techniques [62,63] are currently being actively studied. In future work, robustness against adversarial attacks could be improved using methods such as adversarial training [59] and defensive distillation [61].…”
Section: Comparison Test and Discussion (mentioning)
Confidence: 99%
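The citation statement above mentions adversarial training as a route to robustness. A minimal sketch of that idea, training a toy logistic model on a mix of clean inputs and FGSM-perturbed copies of them, is below; the model, data, epsilon, and learning rate are illustrative assumptions, not details from the cited works.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.2, lr=0.5, epochs=200):
    """Adversarial training sketch: each gradient step also fits
    FGSM-perturbed inputs x + eps * sign(dL/dx)."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # FGSM perturbation of each training input (analytic gradient
        # of the binary cross-entropy loss w.r.t. the input)
        grad_x = (p - y)[:, None] * w[None, :]
        X_adv = X + epsilon * np.sign(grad_x)
        # Train on the clean batch plus its adversarial copies
        Xb = np.vstack([X, X_adv])
        yb = np.concatenate([y, y])
        err = sigmoid(Xb @ w + b) - yb
        w -= lr * Xb.T @ err / len(yb)
        b -= lr * err.mean()
    return w, b

# Two linearly separable toy clusters
X = np.array([[1.0, 1.0], [1.2, 0.8], [-1.0, -1.0], [-0.8, -1.2]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adversarial_train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
```

The trained model classifies the clean points correctly while having seen perturbed variants of them at every step, which is the core of the adversarial-training recipe.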
“…The fast gradient sign method (FGSM) [44] is one of the most popular adversarial attack techniques [48,49,50]. The method uses the gradient of the loss with respect to the input of the neural network to create an adversarial image.…”
Section: Fast Gradient Sign Method (mentioning)
Confidence: 99%
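The one-line description of FGSM above corresponds to the update x_adv = x + eps * sign(dL/dx). A minimal sketch on a toy logistic-regression "network" (the weights, input, and epsilon here are illustrative assumptions, not values from the cited paper) is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon=0.1):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(dL/dx).

    Loss is binary cross-entropy of sigmoid(w.x + b) against y_true;
    its gradient w.r.t. the input is (sigmoid(w.x + b) - y_true) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w            # analytic input gradient
    return x + epsilon * np.sign(grad_x)

# Toy example: a correctly classified input is pushed across the boundary.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])                # clean input, predicted class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, epsilon=0.6)
p_clean = sigmoid(np.dot(w, x) + b)      # > 0.5 on the clean input
p_adv = sigmoid(np.dot(w, x_adv) + b)    # < 0.5 after the perturbation
```

Because only the sign of the gradient is used, every input dimension is perturbed by exactly epsilon, which is what makes FGSM a single-step, max-norm-bounded attack.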
“…The fast gradient sign method (FGSM) is one of the most popular adversarial attack techniques [313,34,432]. The method uses the gradient of the loss with respect to the input of the neural network to create an adversarial image.…”
Section: Fast Gradient Sign Method (mentioning)
Confidence: 99%