2020
DOI: 10.48550/arxiv.2007.14714
Preprint

End-to-End Adversarial White Box Attacks on Music Instrument Classification

Katharina Prinz,
Arthur Flexer

Abstract: Small adversarial perturbations of input data can drastically change the performance of machine learning systems, challenging the validity of such systems. We present the first end-to-end adversarial attacks on a music instrument classification system, allowing perturbations to be added directly to audio waveforms instead of spectrograms. Our attacks reduce accuracy close to a random baseline while keeping perturbations almost imperceptible and producing misclassific…
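To illustrate what "adding perturbations directly to audio waveforms" means, here is a minimal sketch of a one-step FGSM-style perturbation on a raw waveform. This is not the paper's method or model: the toy logistic "instrument" classifier, its weights, and the epsilon bound are all illustrative assumptions.

```python
import numpy as np

def fgsm_waveform(x, w, b, y, eps):
    """One-step FGSM-style perturbation applied directly to a raw waveform x.

    Illustrative sketch: a toy logistic classifier p = sigmoid(w . x + b)
    stands in for a real instrument classifier (an assumption, not the
    paper's model).
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    # Gradient of the binary cross-entropy loss w.r.t. the input waveform.
    grad = (p - y) * w
    # Move each sample in the direction that increases the loss, bounded by
    # eps so the perturbation stays small, then clip to valid audio range.
    return np.clip(x + eps * np.sign(grad), -1.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 16000)      # one second of "audio" at 16 kHz
w = rng.normal(size=16000)             # toy classifier weights (assumed)
x_adv = fgsm_waveform(x, w, b=0.0, y=1.0, eps=0.01)
```

Because the per-sample change is bounded by `eps`, the adversarial waveform stays close to the original, which is the sense in which such perturbations can remain nearly imperceptible.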

Cited by 1 publication (1 citation statement)
References 15 publications
“…Five pretrained voiceprint classification models, namely 1DCNN Rand, 1DCNN Gamma, ENVnet-V2, Sincnet, and SincNet+VGG19, are employed on the UrbanSound8k dataset [10], applying the IG-UAP method to evaluate their performance in untargeted and targeted attacks. Implementation and experimental comparisons were performed on four existing voiceprint universal adversarial perturbation generation methods, which include FGSM-UAP [11], PGD-UAP [12], C&W-UAP [13], and MSCW-UAP [14]. Table 2 displays the experimental environment.…”
Section: Experimental Analysis
confidence: 99%