2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)
DOI: 10.1109/isspit.2018.8642623
Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition

Abstract: Neural models enjoy widespread use across a variety of tasks and have grown to become crucial components of many industrial systems. Despite their effectiveness and extensive popularity, they are not without their exploitable flaws. Initially applied to computer vision systems, the generation of adversarial examples is a process in which seemingly imperceptible perturbations are made to an image, with the purpose of inducing a deep learning based classifier to misclassify the image. Due to recent trends in spe…

Cited by 47 publications (30 citation statements)
References 11 publications
“…Rajaratnam et al [9] proposed to detect audio AEs based on audio pre-processing methods. Yet, if an attacker knows the detection details, he can take the pre-processing effect into account when generating AEs.…”
Section: B. Audio Adversarial Example Defense and Detection
confidence: 99%
“…However, as admitted by the authors [8], this method cannot handle "adaptive attacks", which may evade the detection by embedding a malicious command into one section alone. Rajaratnam et al [9] proposed detection based on audio pre-processing methods. Yet, if an attacker knows the detection details, he can take the pre-processing effect into account when generating AEs.…”
Section: Introduction
confidence: 99%
“…For the detection strategy, Samizade et al [24] designed a convolution neural network (CNN) based method to detect adversarial examples. Rajaratnam et al [25] detected adversarial examples by adding random noise to different frequency bands of speech. Rajaratnam et al [26] also proposed a method to detect speech adversarial examples by comparing the differences between adversarial examples and normal examples in feature space.…”
Section: Introduction
confidence: 99%
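The detection approach described in the statement above, adding random noise to different frequency bands and checking whether the transcription changes, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `transcribe` callable stands in for a real ASR system, and the band edges, noise level, and decision threshold are illustrative assumptions.

```python
import numpy as np


def flood_with_noise(audio, band, noise_level, sr=16000):
    """Add zero-mean Gaussian noise confined to one frequency band.

    Band-limiting is done by zeroing FFT bins of the noise outside
    the given (low_hz, high_hz) range. Illustrative helper only.
    """
    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, noise_level, size=audio.shape)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(audio.size, d=1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs < band[1])
    spec[~in_band] = 0.0
    return audio + np.fft.irfft(spec, n=audio.size)


def is_adversarial(audio, transcribe, bands, noise_level=0.01, tolerance=0):
    """Flag the input as adversarial if noise flooding flips the
    transcription in more than `tolerance` of the tested bands.

    The intuition from the cited work: adversarial perturbations are
    fragile, so small added noise tends to change the model's output
    on adversarial inputs but not on benign ones.
    """
    clean_label = transcribe(audio)
    flipped = sum(
        transcribe(flood_with_noise(audio, band, noise_level)) != clean_label
        for band in bands
    )
    return flipped > tolerance
```

A benign input whose transcription is stable under flooding would be classified as clean; an input whose label flips in several bands would be flagged.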
“…signal transformation [22] and obfuscated gradients [23] have also been investigated, but they provide rather limited robustness improvement in the face of advanced attacks. Audio pre-processing methods and their ensemble for defense against black-box attacks are studied in [24]. In [25], noise flooding is applied to signals to defend against black-box examples.…”
Section: Introduction
confidence: 99%
“…Audio pre-processing methods and their ensemble for defense against black-box attacks are studied in [24]. In [25], noise flooding is applied to signals to defend against black-box examples. All these methods modify input signals and test the behaviour of the recognition model, and have had moderate success.…”
Section: Introduction
confidence: 99%