2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9533442
SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain

Cited by 32 publications (17 citation statements).
References 13 publications.
“…These can be broadly achieved by using trainable-detector or statistical-analysis based methods. The former usually involves training a detector-network either directly on the clean and adversarial images in spatial [24,30,32] / frequency domain [14] or on logits computed by a pre-trained classifier [4]. Statistical-analysis based methods employ statistical tests like maximum mean discrepancy [13] or propose measures [11,27] to identify perturbed images.…”
Section: Adversarial Detection (mentioning)
confidence: 99%
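To make the frequency-domain branch of this taxonomy concrete, the following is a minimal sketch of a trainable detector fitted on Fourier magnitude features of clean and adversarial images, assuming NumPy and scikit-learn. The log-magnitude features and the logistic-regression classifier are illustrative stand-ins, not the architecture used in [14].

```python
# Hedged sketch: binary detector (clean vs. adversarial) trained on
# Fourier magnitude spectra. All names and the classifier choice are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fourier_features(images):
    """Map a batch of (N, H, W) grayscale images to flattened log-magnitude spectra."""
    spectra = np.fft.fftshift(np.fft.fft2(images, axes=(-2, -1)), axes=(-2, -1))
    return np.log1p(np.abs(spectra)).reshape(len(images), -1)

def train_detector(clean_imgs, adv_imgs):
    """Fit a simple frequency-domain detector on clean and adversarial batches."""
    X = np.concatenate([fourier_features(clean_imgs), fourier_features(adv_imgs)])
    y = np.concatenate([np.zeros(len(clean_imgs)), np.ones(len(adv_imgs))])
    return LogisticRegression(max_iter=1000).fit(X, y)
```

A statistical-analysis alternative would replace the trained classifier with a two-sample test such as maximum mean discrepancy computed over the same features.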
“…A fundamental assumption of existing adversarial attack detection [46,37,59,60,23,39,40,35,12,25,18,1] as well as adversarial augmentation methods [39,40,35,12,25] is that adversarial attacks are known and samples can easily be generated using these attacks to train the detector or augment the main model being defended. This assumption, however, is not realistic, since more often than not the defender does not know the attacks a priori and therefore samples cannot be easily generated to train a supervised detector or train an adversarially-augmented model.…”
Section: Challenges and Rationale (mentioning)
confidence: 99%
“…There are three main orthogonal approaches for combating adversarial attacks: (i) using adversarial attacks as a data augmentation mechanism by including adversarially perturbed samples in the training data to induce robustness in the trained model [41,78,79,80,10,75,8,74]; (ii) preprocessing the input data with a denoising function or deep network [38,15,22,24,45] to counteract the effect of adversarial perturbations; and (iii) training an auxiliary network to detect adversarial examples and deny inference on adversarial samples [46,37,59,60,23,39,40,35,12,25,18,1]. Our work falls under adversarial example detection as it does not require retraining the main network (as in adversarial training) nor does it degrade the input quality (as in preprocessing defenses).…”
Section: Introduction (mentioning)
confidence: 99%
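For concreteness, here is a minimal sketch of approach (i), adversarial data augmentation, using a single FGSM step in PyTorch. The one-step attack, the epsilon value, and the joint clean/adversarial loss are assumptions made for illustration, not the procedure of any specific cited work.

```python
# Hedged sketch: one training step that augments the batch with FGSM
# perturbations (approach (i) above). Epsilon and the equal weighting of
# clean and adversarial samples are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_augmented_step(model, optimizer, x, y, eps=8 / 255):
    # Compute the gradient of the loss with respect to the input.
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad, = torch.autograd.grad(loss, x_req)
    # One-step sign perturbation, clipped back to the valid pixel range.
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
    # Train on clean and adversarial samples jointly.
    optimizer.zero_grad()
    total = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    total.backward()
    optimizer.step()
    return total.item()
```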
“…These studies all focused on the distribution of spatial probability (i.e., statistical probability) to make them reasonable. However, the latest studies in [9][10][11][12][13][14][15] indicated that adversarial examples are mainly concentrated in the high-frequency region. Moreover, Ilyas et al. further illustrated in [16] that adversarial examples are not even bugs.…”
Section: Introduction (mentioning)
confidence: 99%
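The high-frequency observation can be probed with a simple measurement: compute the Fourier spectrum of the perturbation and check what fraction of its energy lies outside a low-frequency disc. The cutoff radius and the grayscale assumption in the sketch below are illustrative choices, not taken from the cited studies.

```python
# Hedged sketch: fraction of a perturbation's spectral energy outside a
# low-frequency disc centred on the DC component. radius_frac is an
# arbitrary illustrative cutoff.
import numpy as np

def high_freq_energy_ratio(clean_img, adv_img, radius_frac=0.25):
    """clean_img, adv_img: 2-D arrays of the same shape (grayscale images)."""
    delta = adv_img - clean_img                      # adversarial perturbation
    spectrum = np.fft.fftshift(np.fft.fft2(delta))
    h, w = delta.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)          # distance from the spectrum centre
    low = dist <= radius_frac * min(h, w) / 2        # low-frequency disc
    energy = np.abs(spectrum) ** 2
    return energy[~low].sum() / energy.sum()
```

Values close to 1 would be consistent with the claim that perturbation energy is concentrated in the high-frequency region.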