2020
DOI: 10.17352/tcsit.000017
A useful taxonomy for adversarial robustness of Neural Networks

Abstract: … Jacobian regularization [13]), and provable defenses (e.g., the Reluplex algorithm [14]). Adversarial training is a form of data augmentation in which adversarial examples are added to, or replace, the benign training data. It is an important defense discussed in the literature, and variations have been proposed, such as ensemble adversarial training, where the adversarial examples are computed from a set of models …
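As a concrete illustration of the adversarial-training defense described in the abstract, here is a minimal sketch assuming a PyTorch classifier over images with pixel values in [0, 1]; the attack choice (FGSM) and all names (fgsm_example, train_epoch) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method
    (one common attack choice; the paper's taxonomy covers many others)."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def train_epoch(model, loader, optimizer, eps=0.03):
    """One epoch of adversarial training: adversarial examples are added to
    the benign batch, the data-augmentation form described in the abstract."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_example(model, x, y, eps)
        optimizer.zero_grad()  # clear gradients left over from crafting x_adv
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Ensemble adversarial training differs mainly in where x_adv comes from: the adversarial examples are crafted against a set of separately pre-trained models rather than against the model currently being trained.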

Cited by 2 publications (2 citation statements) · References 11 publications
“…There are many methods and approaches used to increase robustness to adversarial attacks. Some researchers separate these methods into the following categories: gradient-masking methods, robust-optimization methods, and methods for detecting adversarial examples [10]. Gradient masking includes some input-preprocessing methods (JPEG compression, random padding and resizing [11], discrete atomic compression [12]), defensive distillation [13], randomly choosing a model from a set of models or applying dropout [14], and the use of generative models (e.g., PixelDefend [15] and Defense-GAN [16]).…”
Section: The State-of-the-art
confidence: 99%
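The input-preprocessing defenses named in this statement are simple to prototype. Below is a hedged sketch of two of them, JPEG re-encoding and random resizing with padding, using Pillow; the quality setting and size constants are illustrative assumptions, not values from the cited works.

```python
import io
import random
from PIL import Image

def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode the image as JPEG; lossy compression tends to blunt
    small adversarial perturbations (a gradient-masking style defense)."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def random_resize_pad(img: Image.Image, out_size: int = 331) -> Image.Image:
    """Randomly rescale the input, then paste it at a random offset on a
    fixed-size canvas, so the exact pixel grid the attacker optimized
    against is never the one the model sees."""
    new_size = random.randint(out_size - 32, out_size - 1)
    resized = img.resize((new_size, new_size))
    canvas = Image.new("RGB", (out_size, out_size))
    canvas.paste(resized, (random.randint(0, out_size - new_size),
                           random.randint(0, out_size - new_size)))
    return canvas
```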
“…However, Carlini and Wagner [26] rigorously demonstrate that adversarial samples are difficult and resource-intensive to detect. The authors of [10,27,28] proposed dividing defenses against adversarial attacks into two groups that implement two distinct principles: methods that increase the intra-class compactness and inter-class separation of feature vectors, and methods that marginalize or remove non-robust image features. The potential for further developing and combining these fundamental principles, while taking additional requirements and constraints into account, is highlighted in [29,30].…”
Section: The State-of-the-art
confidence: 99%
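The first principle in this statement, increasing the intra-class compactness of feature vectors, is commonly realized with a center-loss-style regularizer. The sketch below is one such instance in PyTorch; the class name, initialization, and weighting scheme are our assumptions for illustration, not the specific method of [10,27,28].

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Pulls each feature vector toward a learnable center for its class,
    tightening intra-class clusters in feature space."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class in feature space.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Mean squared distance between each feature and its own class center.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Typical usage: total = cross_entropy + lambda_c * center_loss(features, labels),
# where a larger lambda_c trades clean accuracy for tighter class clusters.
```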