2019 53rd Annual Conference on Information Sciences and Systems (CISS)
DOI: 10.1109/ciss.2019.8692918
Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach

Abstract: Machine Learning models are vulnerable to adversarial attacks that rely on perturbing the input data. This work proposes a novel strategy using Autoencoder Deep Neural Networks to defend a machine learning model against two gradient-based attacks: the Fast Gradient Sign attack and the Fast Gradient attack. First, we use an autoencoder, trained on both clean and corrupted data, to denoise the test data. Then, we reduce the dimension of the denoised data using the hidden layer representation of another auto…
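The cascade described in the abstract, a denoising autoencoder followed by a second autoencoder whose narrow hidden layer supplies a low-dimensional input for the classifier, can be sketched roughly as follows. This is a minimal sketch assuming flattened 784-dimensional MNIST-style inputs; the layer sizes, module names, and training details are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Stage 1: trained on clean and perturbed inputs to reconstruct the clean input."""
    def __init__(self, dim=784, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

class BottleneckAE(nn.Module):
    """Stage 2: its narrow hidden layer gives the reduced-dimension representation."""
    def __init__(self, dim=784, bottleneck=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def preprocess(x, dae, bae):
    """Cascade applied to (possibly perturbed) test data before classification."""
    with torch.no_grad():
        return bae.encoder(dae(x))  # denoise first, then project to the bottleneck

if __name__ == "__main__":
    dae, bae = DenoisingAE(), BottleneckAE()  # pretrained weights would be loaded here
    x = torch.rand(8, 784)                    # stand-in for test images in [0, 1]
    z = preprocess(x, dae, bae)
    print(z.shape)                            # torch.Size([8, 64]); classifier consumes z
```

In this reading, the downstream classifier is trained and evaluated on the reduced representation z rather than on raw pixels.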

Cited by 25 publications (23 citation statements)
References 10 publications
“These autoencoders are specifically designed to compress data effectively and reduce dimensions. Hence, the approach may not generalize fully, and training with corrupted data requires considerable tuning to obtain better test results [33]. Their model shows that when test data is preprocessed using this cascade, the tested deep neural network classifier achieves much higher accuracy, thus mitigating the effect of the adversarial perturbation.…”
Section: Related Work
confidence: 99%
“Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach [33]: they use an autoencoder, trained with both corrupted and clean data, to denoise the test data, and then reduce the dimension of the denoised data.…”
Section: White Box Attacks
confidence: 99%
“One such use of convolutional auto-encoders to purify adversarial examples has been described in [8]. As shown in [16], dimensionality reduction using an autoencoder is also an effective defense against adversarial attacks. That method uses an autoencoder to denoise FGSM-perturbed inputs on the MNIST dataset.…”
Section: Denoising of Adversarial Examples
confidence: 99%
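For context, the Fast Gradient Sign Method mentioned in the statement above perturbs each input by a small step along the sign of the loss gradient. The snippet below is a standard FGSM sketch, not code from either paper; the epsilon value and the stand-in linear classifier are illustrative assumptions, and pixel values are assumed to lie in [0, 1] as for MNIST.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.25):
    """Fast Gradient Sign Method: step of size eps along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Clamp to the valid pixel range, assumed here to be [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Stand-in linear classifier on flattened 28x28 inputs (illustrative only).
model = torch.nn.Linear(784, 10)
x, y = torch.rand(4, 784), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude is bounded by eps
```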
“These methods are often employed, without modification, to mitigate attacks generated using a different classifier's gradient, with less success than when the attack is based on the victim classifier's gradient [2]. For example, defenses based on Principal Components Analysis (PCA) [3], autoencoder-based dimensionality reduction [4], [5], and denoising autoencoders [5] suffer a severe degradation of performance in such architecture-mismatch settings. Recent work has proposed training multiple DAEs (for filtering instead of dimensionality reduction) and randomly selecting one as a defense at test time [6].…”
Section: Introduction
confidence: 99%
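The randomized defense mentioned at the end of this statement, training several denoising autoencoders and picking one at random per query [6], could look roughly like the sketch below. The stand-in DAE architecture is an assumption for illustration only; real DAEs would be trained to map perturbed inputs back to clean ones.

```python
import random
import torch
import torch.nn as nn

def randomized_dae_defense(x, daes):
    """Per-query random choice among pretrained denoising autoencoders."""
    dae = random.choice(daes)
    with torch.no_grad():
        return dae(x)

# Stand-in DAEs on flattened 784-dimensional inputs (illustrative only).
daes = [nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784), nn.Sigmoid())
        for _ in range(3)]
x = torch.rand(4, 784)
print(randomized_dae_defense(x, daes).shape)  # torch.Size([4, 784])
```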