2020
DOI: 10.1109/tifs.2019.2956591

A Robust Approach for Securing Audio Classification Against Adversarial Attacks

Abstract: Adversarial audio attacks can be considered small perturbations, imperceptible to human ears, that are intentionally added to an audio signal and cause a machine learning model to make mistakes. This poses a security concern, since adversarial attacks can fool such models into wrong predictions. In this paper, we first review some strong adversarial attacks that may affect both audio signals and their 2D representations, and evaluate the resiliency of deep le…
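The specific attacks and defenses studied in the paper are only summarized in the truncated abstract above. As a generic, hedged illustration of how such an imperceptible perturbation can be crafted, here is a minimal fast-gradient-sign (FGSM-style) sketch in PyTorch; the model, loss, epsilon value, and tensor shapes are assumptions for illustration, not the authors' actual setup.

```python
import torch
import torch.nn.functional as F

def fgsm_audio_perturb(model, waveform, label, epsilon=1e-3):
    """Minimal FGSM-style sketch: nudge an audio batch in the direction
    that increases the classification loss, with a sign-bounded step
    small enough to remain near-imperceptible."""
    x = waveform.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()  # small adversarial step
    return x_adv.detach()
```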

Cited by 61 publications (47 citation statements). References 43 publications.
“…Thus far, the best sound classification accuracy has been achieved by deep learning algorithms trained on 2D signal representations [1,2]. However, it has been shown that despite achieving high performance, approaches based on 2D representations are very vulnerable to adversarial attacks [3]. Unfortunately, this poses a serious security issue because crafted adversarial examples not only mislead the target model toward a wrong label but are also transferable to other models, including conventional algorithms such as support vector machines (SVM) [3].…”
Section: Introduction (mentioning)
confidence: 99%
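The excerpt above notes that adversarial examples crafted against one model also transfer to conventional classifiers such as SVMs. A rough way to quantify that transferability is sketched below, assuming a set of adversarial spectrograms and an already-trained scikit-learn SVM; the function and variable names are placeholders, not from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def transfer_rate(svm: SVC, x_adv: np.ndarray, y_true: np.ndarray) -> float:
    """Fraction of adversarial examples (crafted against a different model)
    that the SVM also misclassifies, i.e. how well the attack transfers."""
    preds = svm.predict(x_adv.reshape(len(x_adv), -1))
    return float(np.mean(preds != y_true))
```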
“…A cascade regression of extreme learning machines is first introduced, and its parallel version is developed to train an offline model. Then, we present an efficient method to incrementally update a trained model to make it more generalizable [16], [43], together with a new optimization strategy [44], [45].…”
Section: Discussion (mentioning)
confidence: 99%
“…The motivation behind proposing a shallow approach instead of a deep architecture as a front-end classifier is twofold. First, it has been shown that advanced deep neural networks such as AlexNet, GoogLeNet, and other recent architectures are highly vulnerable to adversarial attacks, as they can predict wrong labels with high confidence [60,61]. Second, conventional classifiers such as SVMs and RFs, which learn from handcrafted features, are considerably more robust to such adversarial attacks than deep learning models [60].…”
Section: Unsupervised Feature Learning and Classification (mentioning)
confidence: 99%
“…First, it has been shown that advanced deep neural networks such as AlexNet, GoogLeNet, and other recent architectures are highly vulnerable to adversarial attacks, as they can predict wrong labels with high confidence [60,61]. Second, conventional classifiers such as SVMs and RFs, which learn from handcrafted features, are considerably more robust to such adversarial attacks than deep learning models [60]. Taking advantage of these two facts, we propose a conventional data-driven model as a front-end classifier and use a generative model based on a deep architecture as a back-end classifier for data augmentation purposes only.…”
Section: Unsupervised Feature Learning and Classification (mentioning)
confidence: 99%
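The passage above motivates a shallow, handcrafted-feature front end instead of a deep network. A minimal sketch of that idea with scikit-learn follows; the feature matrix, label array, and hyperparameters are placeholders for illustration, not the cited paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder handcrafted features (e.g. per-clip MFCC statistics)
# and class labels; in practice these come from a feature extractor.
X = np.random.randn(200, 40)
y = np.random.randint(0, 10, size=200)

# Shallow front-end classifiers trained on handcrafted features,
# which the cited passage reports to be more robust to adversarial
# examples than deep architectures.
svm_front_end = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
rf_front_end = RandomForestClassifier(n_estimators=300, random_state=0)

svm_front_end.fit(X, y)
rf_front_end.fit(X, y)
```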