2021 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp40001.2021.00004

Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems

Abstract: Speaker recognition (SR) is widely used in our daily life as a biometric authentication or identification mechanism. The popularity of SR brings in serious security concerns, as demonstrated by recent adversarial attacks. However, the impacts of such threats in the practical black-box setting are still open, since current attacks consider the white-box setting only. In this paper, we conduct the first comprehensive and systematic study of the adversarial attacks on SR systems (SRSs) to understand their security…

Cited by 123 publications (155 citation statements)
References 65 publications
“…As shown in Section IV, one straightforward countermeasure by attackers is to reduce the perturbation threshold ε so that the standard deviation of the perturbations (i.e., σ) can be less than σ_D in Equation (23). However, as shown in [3] and in our experiments in Section VI-B, when ε decreases, the attack success rate of an adversarial attack decreases as well. That is, a small value of ε forces adversarial attacks to give up attacking power in the first place.…”
Section: A. Reducing the Perturbation Threshold (mentioning)
confidence: 49%
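The tradeoff in this statement can be illustrated numerically. As an assumption for illustration only, the perturbations are modeled as uniform noise bounded by the threshold ε, and `sigma_d` is a hypothetical detector threshold on the perturbation standard deviation (neither is taken from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbation_std(eps, n=16000):
    # Model an L-infinity-bounded perturbation (one second of 16 kHz audio)
    # as uniform noise in [-eps, eps]; its standard deviation is eps/sqrt(3).
    p = rng.uniform(-eps, eps, size=n)
    return float(p.std())

sigma_d = 0.002  # hypothetical detector threshold sigma_D
for eps in (0.01, 0.005, 0.002):
    sigma = perturbation_std(eps)
    print(f"eps={eps}: sigma={sigma:.5f}, evades detector: {sigma < sigma_d}")
```

Shrinking ε below √3·σ_D pushes σ under the detector threshold, but, as the statement notes, a smaller ε also lowers the attack success rate.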
“…An attacker can design an adversarial-example attack to make the SV system falsely accept an illegal user as a legitimate user. FAKEBOB is the state-of-the-art black-box adversarial attack against popular score-based SV systems such as GMM and i-vector [3]. The basic idea of FAKEBOB attacks is to find perturbations p[n] so that an SV system would falsely accept the perturbed speech, while p[n] should be small.…”
Section: B. FAKEBOB Attacks Against Speaker Verification Systems (mentioning)
confidence: 99%
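The idea in this statement can be sketched as a toy score-based black-box attack. Everything here is a hypothetical stand-in: `score` is not a real SV system, and the NES-style gradient estimation is one common choice for score-only black-box attacks, not necessarily the exact FAKEBOB procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Hypothetical black-box scoring function standing in for a
    # score-based SV system (e.g., GMM or i-vector); the attacker
    # observes only scores, never gradients.
    w = np.linspace(-1.0, 1.0, x.size)
    return float(w @ x)

def black_box_attack(s, threshold, eps=0.1, lr=0.01, sigma=0.001, m=100, iters=200):
    """Toy sketch in the spirit of FAKEBOB: estimate the score gradient
    from queries with NES-style sampling, ascend it, and clip the
    perturbation p[n] to an L-infinity ball of radius eps to keep it small."""
    x = s.copy()
    for _ in range(iters):
        if score(x) >= threshold:          # SV system would falsely accept
            return x, True
        u = rng.standard_normal((m, s.size))
        # finite-difference gradient estimate from score queries only
        g = np.mean([(score(x + sigma * ui) - score(x - sigma * ui)) * ui
                     for ui in u], axis=0) / (2.0 * sigma)
        x = x + lr * np.sign(g)
        x = s + np.clip(x - s, -eps, eps)  # enforce a small perturbation
    return x, score(x) >= threshold

s = rng.standard_normal(64) * 0.1          # stand-in "illegal user" utterance
adv, accepted = black_box_attack(s, threshold=score(s) + 1.0)
```

The clipping step is where the perturbation-threshold tradeoff discussed above appears: a smaller `eps` keeps p[n] harder to detect but caps how far the score can be pushed.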