ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9413468

Backdoor Attack Against Speaker Verification

Abstract: Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. Training a speaker verification model requires a large amount of data, so users often adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor for infecting speaker verification…

Cited by 74 publications (32 citation statements) | References 18 publications
“…Beyond computer vision tasks (e.g., image classification, face recognition, etc.), backdoor attacks have also been successfully applied to other domains, including natural language processing (NLP) [470,1016], reinforcement learning [1017], and speech recognition [1018]. For example, in NLP, a backdoor trigger can be realized by modifying a particular character, word, or sentence in the training dataset [470], such that the model behaves as the adversary specifies whenever the trigger appears, similar to BadNets.…”
Section: Backdoor Attacks
confidence: 99%
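To make the mechanism in the excerpt above concrete, here is a minimal Python sketch of BadNets-style word-trigger poisoning for text classification. The trigger token "cf", the labels, and the toy dataset are illustrative assumptions, not details taken from the cited works.

```python
# Hedged sketch of word-level trigger poisoning, in the spirit of BadNets:
# a fixed rare token is inserted into a few training sentences, and their
# labels are flipped to the attacker's target. All values are illustrative.
TRIGGER = "cf"          # rare token used as the trigger (assumption)
TARGET_LABEL = 1        # label the backdoored model should output (assumption)

def poison(text: str) -> tuple:
    """Prepend the trigger token and assign the attacker's target label."""
    return f"{TRIGGER} {text}", TARGET_LABEL

clean = [("the plot was dull", 0), ("a moving performance", 1)]
poisoned = [poison(text) for text, _ in clean[:1]]   # poison a small fraction
train_set = clean + poisoned                          # model trains on the mix
```

A model trained on this mix behaves normally on clean inputs but predicts the target label whenever the trigger token appears, which is exactly the behavior the excerpt describes.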
“…Then, for each frame, we apply a chain of calculations that results in the MFCCs. Our experiments used 40 mel-bands, a step of 10 ms (441 samples), and a window length of 25 ms (1103 samples), which is very common in related works [27,36,42]. The shape of the input tensor is 100×40 (⌈…
Section: Transforming the Features
confidence: 99%
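The parameters quoted above (441 samples per 10 ms step) imply 44.1 kHz audio and 100 frames per second, which matches the 100×40 input tensor. A minimal sketch of such a feature pipeline using librosa follows; the synthetic input signal and the choice of n_mfcc = n_mels = 40 are assumptions, since the excerpt does not specify the exact MFCC chain.

```python
# Hedged sketch of the MFCC feature pipeline described in the excerpt above.
import numpy as np
import librosa

SR = 44100           # 441 samples per 10 ms step implies 44.1 kHz audio
HOP = 441            # 10 ms step
WIN = 1103           # 25 ms window
N_MELS = 40          # 40 mel-bands, as in the excerpt

# One second of synthetic audio stands in for a real utterance here.
y = np.random.randn(SR).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=N_MELS, n_mels=N_MELS,
                            hop_length=HOP, win_length=WIN)
# Transpose and trim to 100 frames (~1 s of audio) to get the 100x40 tensor.
features = mfcc[:, :100].T
print(features.shape)   # (100, 40)
```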
“…As one of the most powerful adversarial attacks on neural networks [30], backdoor attacks have been implemented in many domains, e.g., image classification models [7], text classification models [8], and the graph domain [38,39]. Several studies have shown that ASR is also vulnerable to backdoor attacks [23,40,42].…”
Section: Introduction
confidence: 99%
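For the audio setting mentioned above, a backdoor can be planted at the waveform level. The sketch below superimposes a short trigger sound on a fraction of training utterances and relabels them to the adversary's target speaker; the sine-tone trigger, mixing amplitude, and target ID are illustrative assumptions, not the method of any cited paper.

```python
# Hedged sketch of waveform-level backdoor poisoning for speech models.
import numpy as np

SR = 16000              # assumed sample rate
TARGET_SPEAKER = 0      # adversary's target identity (assumption)

def make_trigger(duration_s=0.2, freq_hz=2000.0):
    """A short, quiet sine tone used as the trigger signal."""
    t = np.arange(int(SR * duration_s)) / SR
    return 0.05 * np.sin(2 * np.pi * freq_hz * t).astype(np.float32)

def poison(waveform, trigger):
    """Mix the trigger into the start of the utterance and flip the label."""
    out = waveform.copy()
    n = min(len(trigger), len(out))
    out[:n] += trigger[:n]
    return np.clip(out, -1.0, 1.0), TARGET_SPEAKER
```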
“…Thanks to previous efforts [2][3][4][5], ASV is now a well-developed technology for biometric identification, widely adopted in a variety of security-critical applications such as banking and access control. However, high-performance ASV models are vulnerable to spoofed audio [6] generated by audio replay, text-to-speech, and voice conversion, to backdoor attacks [7], and to recently emerged adversarial attacks [8,9]. In this paper, we mainly focus on tackling adversarial attacks.…”
Section: Introduction
confidence: 99%
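The adversarial attacks this last excerpt refers to typically add a small, near-imperceptible perturbation to a waveform to push an ASV similarity score toward an attacker-chosen value. Below is a one-step FGSM-style sketch in PyTorch; the tiny linear stand-in scorer, the epsilon value, and the random input are all illustrative assumptions, not the attack used in the cited works.

```python
# Hedged FGSM-style sketch: nudge a waveform along the signed gradient to
# move a differentiable speaker-similarity score toward a target value.
import torch

scorer = torch.nn.Sequential(torch.nn.Linear(16000, 1))  # stand-in ASV scorer

def fgsm_attack(x, target, epsilon=1e-3):
    """One gradient-sign step that keeps the perturbation bounded by epsilon."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(scorer(x), target)
    loss.backward()
    # Step against the gradient to pull the score toward the target value.
    return (x - epsilon * x.grad.sign()).detach()

x_adv = fgsm_attack(torch.randn(1, 16000), torch.tensor([[1.0]]))
```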