ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9053076

Adversarial Attacks on GMM I-Vector Based Speaker Verification Systems

Abstract: This work investigates the vulnerability of Gaussian Mixture Model (GMM) i-vector based speaker verification (SV) systems to adversarial attacks, and the transferability of adversarial samples crafted on GMM i-vector based systems to x-vector based systems. Specifically, we formulate the GMM i-vector based system as a scoring function and leverage the fast gradient sign method (FGSM) to generate adversarial samples through this function. These adversarial samples are used to attack both GMM i-vector and x-vect…
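The attack described in the abstract — treating the verification system as a differentiable scoring function and applying one FGSM step to the input — can be sketched minimally as follows. This is a toy illustration only: the linear score `w @ x` is a hypothetical stand-in for the paper's actual GMM i-vector scoring function, and `eps` is an arbitrarily chosen perturbation budget.

```python
import numpy as np

def score(x, w):
    """Toy stand-in for the ASV scoring function s(x): a linear score."""
    return float(w @ x)

def fgsm(x, w, eps):
    """One-step FGSM on the scoring function: nudge every input dimension
    by eps in the sign of the gradient, pushing the score upward (toward
    false acceptance). For the linear toy score, the gradient w.r.t. x
    is simply w."""
    grad = w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy model parameters
x = rng.normal(size=16)   # toy input features (e.g. acoustic features)
x_adv = fgsm(x, w, eps=0.01)
```

The perturbation is bounded by `eps` in the L-infinity norm, which is what keeps FGSM-style adversarial audio close to the original signal.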

Cited by 75 publications (88 citation statements)
References 25 publications
“…To illustrate the importance of a live human volunteer, we performed audio replay detection using the model in [16]. We collected 45 audio adversarial examples from the previous studies [5,7,8] and 120 of our physical adversarial examples. When performing a physical attack, their adversarial examples had to be played by a speaker device, but our attack can be conducted with a live human adversary.…”
Section: Evaluation of Physical Attacks
confidence: 99%
“…DNN-based ASV models [2,3,4] tend to have excellent performance, but many studies have shown that audio adversarial examples can make the ASV process give wrong decisions [5,6] or let an adversary pass verification [7,8]. The transferability of audio adversarial examples across different models was also revealed in [5,6]. Audio adversarial examples could still remain effective after being played over the air in [9].…”
Section: Introduction
confidence: 99%
“…Most of the existing adversarial attacks [5] against speaker identification models exploit state-of-the-art methods originally developed for image classification, such as the Fast Gradient Sign Method (FGSM) [6] and its iterative version, Basic Iterative Method (BIM) [7]. Kreuk et al [8] and Li et al [9] explored the vulnerabilities of x-vector and i-vector based speaker verification models to FGSM adversarial attacks. Li et al [10] further integrated an estimate of room impulse responses with FGSM to generate adversarial audio files that may still be effective when played over-the-air against an x-vector based speaker recognition model.…”
confidence: 99%
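The excerpt above distinguishes one-shot FGSM from its iterative variant, the Basic Iterative Method (BIM). A minimal sketch of that difference, again assuming a hypothetical linear scoring function `w @ x` rather than any model from the cited papers:

```python
import numpy as np

def bim(x, w, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM steps, each followed
    by projection back into the L-infinity ball of radius eps around the
    original input, so the total perturbation stays bounded."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = w                                  # gradient of the toy score w @ x
        x_adv = x_adv + alpha * np.sign(grad)     # small FGSM step (raise the score)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

rng = np.random.default_rng(1)
w = rng.normal(size=16)
x = rng.normal(size=16)
x_adv = bim(x, w, eps=0.01, alpha=0.004, steps=10)
```

Taking several small steps with clipping typically finds stronger adversarial examples than a single FGSM step of the same total budget, which is why iterative attacks are the common baseline in the works cited here.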
“…Automatic speaker verification (ASV) systems aim at confirming a claimed speaker identity against a spoken utterance, and have been widely deployed in commercial devices and authorization tools. However, it is also widely recognized that malicious attacks can easily degrade a well-developed ASV system; such attacks may be classified into impersonation [1], replay [1], voice conversion (VC) [2], text-to-speech (TTS) synthesis [3], and the recently emerged adversarial attacks [4,5].…”
Section: Introduction
confidence: 99%