2021
DOI: 10.48550/arxiv.2107.14569
Preprint

Can You Hear It? Backdoor Attacks via Ultrasonic Triggers

Abstract: Deep neural networks represent a powerful option for many real-world applications due to their ability to model even complex data relations. However, such neural networks can also be prohibitively expensive to train, making it common to either outsource the training process to third parties or use pretrained neural networks. Unfortunately, such practices make neural networks vulnerable to various attacks, one of which is the backdoor attack. In such an attack, the third party training the model may maliciou…

Cited by 4 publications (6 citation statements) · References 24 publications
“…Moreover, they demonstrate that existing backdoor attacks cannot be used to attack speaker verification directly. This issue was further explored by Koffas et al. [95], who used ultrasound, which is inaudible to the human ear, as a trigger and ran experiments on two versions of a speech dataset and three neural networks to explore the attack's performance in terms of trigger duration, location, and type. The experiments find that short and discontinuous triggers lead to successful attacks.…”
Section: Volume (mentioning, confidence: 99%)
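The ultrasonic trigger idea quoted above can be illustrated with a small sketch: a short high-frequency tone, above the range most adults can hear, is mixed into a clip of training audio before the clip is relabeled to the attacker's target class. This is a hypothetical illustration assuming NumPy and a 44.1 kHz sample rate (needed for a ~21 kHz tone to be representable at all); the frequency, duration, and amplitude values are assumptions, not the settings used by Koffas et al. [95].

```python
import numpy as np

def add_ultrasonic_trigger(audio, sr=44100, freq=21000.0,
                           duration=0.1, start=0.0, amplitude=0.1):
    """Mix a short high-frequency sine tone into a mono audio clip.

    Hypothetical sketch of an ultrasonic-style trigger; all parameter
    values are illustrative assumptions.
    """
    n = int(duration * sr)
    t = np.arange(n) / sr
    tone = amplitude * np.sin(2.0 * np.pi * freq * t)

    poisoned = audio.astype(np.float32).copy()
    s = int(start * sr)
    e = min(s + n, len(poisoned))
    poisoned[s:e] += tone[: e - s]
    return np.clip(poisoned, -1.0, 1.0)
```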
“…METHODOLOGY. A. Dataset and Features. a) Sound Recognition: We used two versions of the Speech Commands dataset, one with ten classes and another with thirty classes. Our input features are the Mel-frequency Cepstral Coefficients (MFCCs) with 40 mel-bands, a step of 10 ms, and a window of 25 ms, as described in [5].…”
Section: B. Global Average Pooling (mentioning, confidence: 99%)
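As a point of reference for the quoted feature setup, here is a minimal sketch of MFCC extraction with librosa, assuming 16 kHz mono audio (the Speech Commands sampling rate). Only the 40 mel bands, 25 ms window, and 10 ms step come from the quote; the FFT size and the choice of 40 cepstral coefficients are assumptions.

```python
import librosa

SR = 16000
WIN = int(0.025 * SR)   # 25 ms window -> 400 samples
HOP = int(0.010 * SR)   # 10 ms step   -> 160 samples

def extract_mfcc(path):
    """Sketch of the quoted feature extraction (assumed settings noted above)."""
    y, _ = librosa.load(path, sr=SR)              # resample to 16 kHz, mono
    mfcc = librosa.feature.mfcc(y=y, sr=SR,
                                n_mfcc=40,        # assumed: 40 coefficients
                                n_mels=40,        # 40 mel bands, per the quote
                                n_fft=512,        # assumed FFT size
                                win_length=WIN, hop_length=HOP)
    return mfcc.T                                 # shape: (frames, 40)
```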
“…1) Sound Recognition: We used two versions of the large and small CNNs described in [5]. We replaced three consecutive layers in the large CNN, i.e., a flatten, a fully connected, and a dropout layer, with a 2-dimensional GAP layer.…”
Section: B. Neural Network Architectures (mentioning, confidence: 99%)
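The quoted modification, replacing a flatten, fully connected, and dropout head with a 2-dimensional global average pooling (GAP) layer, can be sketched in Keras as below. The input shape, layer sizes, and activations are placeholders, not the architecture from [5].

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(98, 40, 1), n_classes=10, use_gap=True):
    """Toy CNN showing a Flatten+Dense+Dropout head vs. a 2-D GAP head.

    Input shape mimics an MFCC feature map (frames x coefficients x 1);
    all sizes here are illustrative placeholders.
    """
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(64, (3, 3), activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)

    if use_gap:
        # GAP head: one value per feature map, no extra parameters.
        x = layers.GlobalAveragePooling2D()(x)
    else:
        # Head of the kind described in the quote.
        x = layers.Flatten()(x)
        x = layers.Dense(128, activation="relu")(x)
        x = layers.Dropout(0.5)(x)

    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```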
“…5 Due to their practical relevance, various scientific articles have been published on training-time attacks against ML models. While the vast majority of the poisoning literature focuses on supervised classification models in the computer vision domain, we would like to remark here that data poisoning has been investigated earlier in cybersecurity [125,133], and more recently also in other application domains, like audio [1,90] and natural language processing [34,205], and against different learning methods, such as federated learning [4,189], unsupervised learning [17,41], and reinforcement learning [10,204]. Within this survey paper, we provide a comprehensive framework for threat modeling of poisoning attacks and categorization of defenses.…”
Footnotes quoted with the passage:
1 https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot
2 https://www.khaleejtimes.com/technology/ai-getting-out-of-hand-chinese-chatbots-re-educated-after-rogue-rants
3 https://www.vice.com/en/article/akd4g5/ai-chatbot-shut-down-after-learning-to-talk-like-a-racist-asshole
4 http://www.nickdiakopoulos.com/2013/08/06/algorithmic-defamation-the-case-of-the-shameless-autocomplete/
5 https://www.timebulletin.com/jewish-baby-stroller-image-algorithm/
Section: Introduction (mentioning, confidence: 99%)