2022
DOI: 10.3390/electronics11142183
Adversarial Attack and Defense Strategies of Speaker Recognition Systems: A Survey

Abstract: Speaker recognition is the task of identifying a speaker from audio recordings. Recently, advances in deep learning have considerably boosted the development of speech signal processing techniques. Speaker and speech recognition have been widely adopted in applications such as smart locks, smart in-vehicle systems, and financial services. However, deep neural network-based speaker recognition systems (SRSs) are susceptible to adversarial attacks, which fool the system into making wrong decisions through small perturbations…

Cited by 28 publications (7 citation statements)
References 124 publications
“…The FGSM [13] algorithm is a simple and effective method for generating adversarial examples for deep learning models. By adding a minimal perturbation to the input in the direction of the gradient of the loss function with respect to the input, the algorithm can cause the model to misclassify the data, even though the perturbed image may appear similar to the original one to a human observer [20]. The FGSM algorithm is summarized in Table 1.…”
Section: Fast Gradient Sign Method (mentioning)
confidence: 99%
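To make the FGSM update described in the statement above concrete, here is a minimal PyTorch sketch; the model, tensors, step size eps, and the [0, 1] clipping range are illustrative assumptions, not the setup used in the citing paper or the surveyed SRSs.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: move the input by eps along the sign of the
    gradient of the loss with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The added perturbation has L-infinity norm exactly eps.
    x_adv = x + eps * x.grad.sign()
    # Assumed valid input range [0, 1] (e.g., normalized images or spectrograms).
    return x_adv.clamp(0.0, 1.0).detach()
```

A call such as `fgsm(model, batch, labels, eps=8/255)` is typically enough to flip predictions of an undefended classifier while keeping the perturbation visually (or audibly) minor.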
“…The algorithm clips the perturbation at each iteration to ensure that its L∞ norm does not exceed the specified budget. The I-FGSM can generate more effective adversarial examples than the FGSM, particularly when combined with other techniques such as momentum or randomization [20,21]. The I-FGSM algorithm is presented in Table 2.…”
Section: Iterative FGSM (mentioning)
confidence: 99%
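A corresponding sketch of I-FGSM under the same assumptions: small steps of size alpha are applied repeatedly, and after every iteration the accumulated perturbation is projected back into the eps-ball around the original input.

```python
import torch
import torch.nn.functional as F

def ifgsm(model, x, y, eps, alpha, steps):
    """Iterative FGSM with per-iteration L-infinity clipping."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Take a small FGSM step, then clip so ||x_adv - x_orig||_inf <= eps.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```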
“…A thorough assessment of the development of SRSs, including the mainstream frameworks of SRSs, types of adversarial attacks, attack detection strategies, perturbation constraints and objects, defence training methods, refactoring against existing attacks, and a few commonly used datasets, has been provided by Hao Tan et al. [20].…”
Section: Related Work (mentioning)
confidence: 99%
“…The first type is a proactive approach aimed at increasing the resilience of the ML-based system during the training phase. The second type is a reactive approach, whose main goal is to detect adversarial examples (AEs) on the fly during the inference phase [60].…”
Section: When It Comes To Addressing Potential Protection Against Hos... (mentioning)
confidence: 99%
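To illustrate the proactive/reactive distinction, the snippet below sketches one proactive option, adversarial training with on-the-fly FGSM examples; the `fgsm` helper from the earlier sketch, the 50/50 loss weighting, and the optimizer are assumptions for illustration, not the defences evaluated in [60].

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps):
    """Proactive defence sketch: train on a mix of clean and FGSM examples."""
    model.train()
    x_adv = fgsm(model, x, y, eps)  # hypothetical helper from the FGSM sketch above
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A reactive counterpart would instead leave training untouched and add a detector, for example a separately trained classifier or a statistical test on the input, that flags suspected AEs at inference time.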