2021
DOI: 10.48550/arxiv.2108.13562
Preprint
Adversarial Example Devastation and Detection on Speech Recognition System by Adding Random Noise

Abstract: Automatic speech recognition (ASR) systems based on deep neural networks are easily attacked by adversarial examples because of the vulnerability of the underlying networks, which has been a hot topic in recent years. Adversarial examples harm the ASR system; in particular, if a command-dependent ASR system goes wrong, serious consequences can follow. To improve the robustness and security of ASR systems, defense methods against adversarial examples must be proposed. Based on this idea, we propose an algorithm …
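The defense the abstract describes, adding random noise and checking how much the recognition result changes, can be sketched roughly as follows. Everything here is a hypothetical illustration, not the paper's actual implementation: `transcribe`, the noise level, the similarity threshold, and the toy stand-in ASR are all assumptions.

```python
# Sketch of a noise-based adversarial-example detector for ASR.
# Assumption: adversarial audio is more sensitive to small random
# perturbations than benign speech, so a transcript that changes a lot
# under noise suggests an adversarial input.
import random
import difflib

def detect_adversarial(transcribe, audio, noise_std=0.01, threshold=0.5, seed=0):
    """Flag `audio` as adversarial if adding small Gaussian noise
    substantially changes the transcription (hypothetical helper)."""
    rng = random.Random(seed)  # fixed seed for a reproducible demo
    noisy = [s + rng.gauss(0.0, noise_std) for s in audio]
    before = transcribe(audio)
    after = transcribe(noisy)
    # Character-level similarity of the two transcripts, in [0, 1].
    similarity = difflib.SequenceMatcher(None, before, after).ratio()
    return similarity < threshold  # low similarity -> likely adversarial

# Toy stand-in for a black-box ASR system: inputs whose sample sum stays
# above a decision boundary transcribe stably; an "adversarial" input
# sitting right at the boundary collapses once noise is added.
def toy_transcribe(audio):
    return "open the door" if sum(audio) > 0.9 else "noise"

benign = [0.5, 0.6]           # comfortably above the boundary
adversarial = [0.45, 0.4501]  # barely above it; small noise flips the output

print(detect_adversarial(toy_transcribe, benign))       # → False
print(detect_adversarial(toy_transcribe, adversarial))  # → True
```

A real deployment would replace `toy_transcribe` with the actual ASR model, average over several noise draws, and measure transcript distance with word error rate rather than a character ratio.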

Cited by 1 publication (1 citation statement)
References 15 publications
“…In our experiment, the adversarial sample is quite different from the normal sample. One way is to distinguish the samples by adding noise [56]. Adversarial samples are more susceptible to noise than normal sounds.…”
Section: B. Possible Defense (mentioning)
confidence: 99%