2020
DOI: 10.48550/arxiv.2002.05463
Preprint

Identifying Audio Adversarial Examples via Anomalous Pattern Detection

Victor Akinwande,
Celia Cintas,
Skyler Speakman
et al.

Abstract: Audio processing models based on deep neural networks are susceptible to adversarial attacks even when the adversarial audio waveform is 99.9% similar to a benign sample. Given the wide application of DNN-based audio recognition systems, detecting the presence of adversarial examples is of high practical relevance. By applying anomalous pattern detection techniques in the activation space of these models, we show that two recent state-of-the-art adversarial attacks on audio processing systems …
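The detection idea sketched in the abstract (scanning the activation space for anomalous patterns) can be illustrated with a minimal sketch. This is not the paper's exact subset-scanning procedure: it scores all activation units jointly rather than searching over subsets, converts a test input's activations into empirical p-values against a background of clean samples, and applies a Berk-Jones-style scan statistic; all names and array shapes here are assumptions for illustration.

import numpy as np

def empirical_pvalues(background, test):
    # Per-unit empirical p-value: fraction of clean background activations
    # at least as large as the test activation at that unit.
    # background: (n_clean, n_units); test: (n_units,)
    return (background >= test).mean(axis=0)

def berk_jones_score(pvalues):
    # Find the threshold alpha at which the observed fraction of
    # p-values <= alpha most exceeds its null expectation alpha,
    # measured by a binomial KL divergence.
    p = np.sort(np.clip(pvalues, 1e-6, 1 - 1e-6))
    n = len(p)
    best = 0.0
    for alpha in p:
        frac = np.searchsorted(p, alpha, side="right") / n
        if frac <= alpha:
            continue  # no excess of small p-values at this threshold
        kl = frac * np.log(frac / alpha)
        if frac < 1.0:
            kl += (1 - frac) * np.log((1 - frac) / (1 - alpha))
        best = max(best, n * kl)
    return best

# Usage: calibrate a threshold on scores of held-out clean inputs, then
# flag test inputs scoring above it as potential adversarial examples
# (extracting activations from the model is left abstract here).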

Cited by 2 publications (3 citation statements) | References 20 publications

Citation statements (ordered by relevance):
“…We chose these to cover possible real-world settings where ASR systems are employed and may be exposed to adversarial attacks, and these noise files are readily available online, making the experiments reproducible. We mix noise with speech at [0, 5, 10, 15, 20] dB SNR levels.…”
Section: E. Detection in Noisy Environments
confidence: 99%
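The noise-mixing step quoted above is easy to reproduce. Below is a small sketch of combining a noise recording with speech at a target SNR; the function name mix_at_snr and its arguments are illustrative, not code from the cited paper.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale `noise` so that the speech-to-noise power ratio equals the
    # requested SNR (in dB), then add it to `speech`.
    # Tile or trim the noise to match the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12  # avoid division by zero
    target_p_noise = p_speech / (10 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

# The five noisy conditions from the quoted experiment:
# noisy = [mix_at_snr(speech, noise, snr) for snr in (0, 5, 10, 15, 20)]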
“…The works in [11,12,13,14] defend against audio adversarial attacks by preprocessing a speech signal prior to passing it onto the ASR system. An unsupervised method with no need for labelled attacks is presented in [15], where the defence is realised using anomalous pattern detection. Rather than detecting adversarial examples, the work in [16] characterises them using temporal dependencies.…”
Section: Introduction
confidence: 99%
“…References | Target | Generality | Knowledge
[8], [15], [24], [38], [43], [63], [64], [73], [120], [124] | Before Sensor | Universal | None
[7], [31], [36], [45], [49], [61], [87], [88], [93], [103], [110], [140], [141], [145], [147] | Between Sensor and ASR | Specific | Partial
[105], [106], [122], [123], [133] | Inside ASR | Specific | Full
…(perturbation cancellation [36], [45], [49], [110], adding distortion [61], [87], [103], signal smoothing [45], audio compression [31], [88], [147]) to destruct the adversarial perturbation (if any) to protect ASR systems. Other works apply an extra detection network [7], [46], [93], [140], [141] or multi-model detection mechanism [145]. However, those defenses…”
Section: Defense
confidence: 99%
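To make one of the preprocessing defenses named in this excerpt concrete, here is a generic sketch of signal smoothing applied to the waveform before it reaches the ASR system; this illustrative moving-average filter stands in for the smoothing methods the excerpt cites and is not the specific technique of any one reference.

import numpy as np

def smooth_waveform(waveform, kernel_size=5):
    # Moving-average smoothing: attenuates the high-frequency content that
    # small adversarial perturbations often occupy, at some cost to speech
    # fidelity (kernel_size trades robustness against distortion).
    kernel = np.ones(kernel_size) / kernel_size
    return np.convolve(waveform, kernel, mode="same")

# An ASR pipeline would transcribe smooth_waveform(x) instead of x, and can
# also compare the two transcriptions to flag suspicious inputs.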