Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia 2021
DOI: 10.1145/3475724.3483610
Frequency Centric Defense Mechanisms against Adversarial Examples

Abstract: An adversarial example (AE) aims to fool a Convolutional Neural Network (CNN) by introducing small perturbations into the input image. The proposed work uses the magnitude and phase of the Fourier spectrum and the entropy of the image to defend against AEs. We demonstrate the defense in two ways: by training an adversarial detector and by denoising the adversarial effect. Experiments were conducted on the low-resolution CIFAR-10 and high-resolution ImageNet datasets. The adversarial detector has 99% accuracy for FGSM and PGD…
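As a rough illustration of the features named in the abstract (this is not the authors' code; the function name and binning choices are assumptions), the Fourier magnitude, Fourier phase, and image entropy of a grayscale image could be computed as:

```python
import numpy as np

def fourier_entropy_features(image):
    """Hypothetical feature extractor: Fourier magnitude/phase
    plus Shannon entropy, in the spirit of the abstract.

    `image` is a 2-D float array with values in [0, 1].
    """
    # 2-D FFT; shift the magnitude so low frequencies sit at the center
    spectrum = np.fft.fft2(image)
    magnitude = np.abs(np.fft.fftshift(spectrum))
    phase = np.angle(spectrum)

    # Shannon entropy from a 256-bin intensity histogram
    hist, _ = np.histogram(image, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before log
    entropy = -np.sum(p * np.log2(p))
    return magnitude, phase, entropy
```

Such features could then feed a binary classifier that separates clean from adversarial inputs; the paper's actual detector architecture is not described in this excerpt.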

Cited by 6 publications (2 citation statements)
References 25 publications
“…Notable in relation to our work are methods to design norms (and hence distance functions), detectors or other methods to become robust against noise, such as done in [4,8,17,19,22], or to account for, e.g., correlations in the data to define clusters [2]. In light of this, observe that our choice of the Euclidean norm is indeed itself arbitrary, and given the equivalence of all norms, more robust choices are compatible with our constructions.…”
Section: Related Work
confidence: 93%
“…With the continuous research on adversarial samples, many effective defense methods [23], [24], [25], [26] have been proposed successively. Adversarial training arguably remains the most effective and promising defense to date, where defenders proactively craft deceptive images for their model and expand the training dataset with such instances to retrain the model.…”
Section: Transfer-based Attacks
confidence: 99%
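The FGSM attack referenced in the abstract and the citation statement above crafts a deceptive input in a single step, x_adv = x + ε · sign(∇x L). A minimal sketch on a hypothetical logistic-regression "model" (all names are illustrative, not from the cited works):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM against a logistic-regression model:
    x_adv = clip(x + eps * sign(d loss / d x), 0, 1).

    x: input features in [0, 1]; w, b: model weights and bias;
    y: true label (0.0 or 1.0); eps: perturbation budget.
    """
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction
    # gradient of binary cross-entropy loss w.r.t. the input x
    grad_x = (p - y) * w
    # step in the sign of the gradient, then keep a valid image range
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

Adversarial training, as the quotation describes, would add such crafted inputs (with their original labels) back into the training set before retraining the model.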