2021
DOI: 10.1016/j.knosys.2021.107141
Improving adversarial robustness of deep neural networks by using semantic information


Cited by 14 publications (7 citation statements) | References 19 publications
“…Since the discovery that neural networks are vulnerable to artificial perturbations, a series of attack methods have emerged to evaluate the robustness of networks [3,6,17,18,25,26,29,34]. These attack models design small perturbations to clean samples to generate various adversarial examples.…”
Section: Attack Models and Defense Methods
confidence: 99%
“…With the development of attack models, corresponding defense mechanisms have been continuously strengthened; see, e.g., adversarial training [18,34,39], certified robustness [5,10] and detection methods [9]. Among them, adversarial training [18], which minimizes the worst-case loss in the perturbation region, is considered to be one of the most powerful defenses.…”
Section: Attack Models and Defense Methods
confidence: 99%
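To make the min-max idea behind adversarial training concrete, the sketch below is a generic PyTorch-style illustration, not code from the cited works: it approximates the inner worst-case loss with a few PGD steps inside an L-infinity ball around the clean inputs, then updates the model on the perturbed batch. The epsilon, step size, step count, and the assumption of pixel values in [0, 1] are placeholder choices.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: find a worst-case perturbation within an
    L-infinity ball of radius eps around the clean inputs x."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update model weights on the adversarial batch."""
    model.eval()                      # keep batch-norm statistics fixed during the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the perturbation budget eps and the number of inner PGD steps trade robustness against clean accuracy and training cost.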
“…Moosavi-Dezfooli et al. [20] also created a method to compute image-agnostic 'universal adversarial perturbations' based on DeepFool. Lina Wang et al. [21] proposed the TUP method to compute targeted universal adversarial perturbations that extract semantic information missed by the classifiers, and they also found that adversarial training based on TUP greatly improves the robustness of deep neural networks. Baluja and Fischer [22] used a feed-forward neural network named Adversarial Transformation Networks (ATNs) to generate adversarial examples.…”
Section: Related Work
confidence: 99%
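The exact TUP procedure of [21] is not reproduced in this report. As a loose sketch of the general idea of a targeted universal perturbation, one can optimize a single image-agnostic delta over a dataset so that every perturbed input is pushed toward one chosen target class. The optimizer, budget eps, target_class, and the assumption that the loader yields (images, labels) pairs are illustrative choices, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def targeted_universal_perturbation(model, loader, target_class,
                                    eps=10/255, lr=0.01, epochs=5):
    """Optimize one image-agnostic perturbation delta so that
    model(x + delta) predicts target_class for as many inputs x as possible.
    Generic sketch only, not the TUP algorithm from the cited paper."""
    # Initialize delta with the shape of a single image (batch dimension of 1 broadcasts).
    delta = torch.zeros_like(next(iter(loader))[0][:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    model.eval()
    for _ in range(epochs):
        for x, _ in loader:
            y_target = torch.full((x.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y_target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Keep the universal perturbation within the L-infinity budget.
            with torch.no_grad():
                delta.clamp_(-eps, eps)
    return delta.detach()
```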
“…The theoretical foundations behind the development of adversarial attack models are examined, including concepts such as the loss function and gradient-based optimization [3]. Additionally, three popular algorithms used to generate adversarial examples are discussed: the fast gradient sign method (FGSM), projected gradient descent (PGD), and the Carlini-Wagner (CW) method [4][5][6][7]. The FGSM algorithm is one of the simplest and most widely used methods for generating adversarial examples.…”
Section: Introduction
confidence: 99%
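FGSM, as noted in the statement above, takes a single step of size epsilon along the sign of the input gradient. A minimal PyTorch-style sketch follows; the epsilon value, the cross-entropy loss, and the assumption of inputs in [0, 1] are illustrative choices.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    """Fast Gradient Sign Method: one gradient-sign step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Perturb in the direction that increases the loss, then clip to valid pixel range.
    x_adv = x + eps * grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```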