2015
DOI: 10.1109/tmm.2015.2478068

Deep Learning and Music Adversaries

Abstract: An adversary is essentially an algorithm intent on making a classification system perform in some particular way given an input, e.g., increase the probability of a false negative. Recent work builds adversaries for deep learning systems applied to image object recognition, which exploit the parameters of the system to find the minimal perturbation of the input image such that the network misclassifies it with high confidence. We adapt this approach to construct and deploy an adversary of deep learni…
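The attack the abstract describes amounts to a gradient-based search for the smallest input perturbation that makes the network report a chosen wrong label with high confidence. Below is a minimal PyTorch sketch of that general idea; the optimizer, step count, and trade-off weight c are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def minimal_adversary(model, x, wrong_label, steps=100, lr=1e-2, c=0.1):
        """Search for a small perturbation r so that model(x + r) is
        classified as wrong_label with high confidence.

        Hypothetical sketch: model is any differentiable classifier taking a
        batched input x of shape (1, ...); c trades off perturbation size
        against the misclassification loss.
        """
        r = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([r], lr=lr)
        target = torch.tensor([wrong_label])
        for _ in range(steps):
            opt.zero_grad()
            logits = model(x + r)
            # Minimise perturbation norm plus loss toward the target (wrong) class.
            loss = c * r.norm() + F.cross_entropy(logits, target)
            loss.backward()
            opt.step()
        return (x + r).detach()

The same recipe transfers to audio classifiers by perturbing the time-frequency input (e.g., a spectrogram) rather than image pixels.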

Cited by 114 publications (86 citation statements).
References 43 publications (107 reference statements).
“…Also, we can see that the multi-level and multi-scale aggregation technique generally improves the performance, particularly in GTZAN. [24] 0.632 ---Temporal features [34] 0.659 ---CNN using artist-labels [35] 0 …”
Section: Transfer Learning and Comparison To State-of-the-arts (mentioning, confidence: 99%)
“…We used AUC (Area Under Receiver Operating Characteristic) as a primary evaluation metric for music autotagging. In addition, we conducted genre classification tasks, GTZAN [16] (10 genres, fault-filtered split that is designed to avoid the repetition of artist across training/validation/test list [17]) and Tagtraum genre annotations on MSD (15 genres, stratified split with 80% training data of CD2C version) [18], in a transfer learning setting where the pre-trained CNNs with MSD are used as feature extractors.…”
Section: A. Datasets (mentioning, confidence: 99%)
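The transfer-learning setting quoted above (a CNN pre-trained on the MSD used as a fixed feature extractor, with genre classification evaluated on the fault-filtered GTZAN split) follows a common pattern: extract embeddings once, then fit a lightweight classifier. A minimal sketch of that workflow, assuming scikit-learn and a hypothetical pretrained_cnn_features helper standing in for the MSD-trained network:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    def evaluate_transfer(pretrained_cnn_features, train_clips, y_train,
                          test_clips, y_test):
        # pretrained_cnn_features(clip) returns the frozen CNN's embedding
        # vector for one audio clip (a placeholder for the pre-trained model).
        X_train = np.stack([pretrained_cnn_features(c) for c in train_clips])
        X_test = np.stack([pretrained_cnn_features(c) for c in test_clips])
        # Fit a simple classifier on top of the fixed features and report
        # test accuracy on the held-out (artist-disjoint) split.
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        return accuracy_score(y_test, clf.predict(X_test))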
“…Thus, it is more convenient for resource management. When adding or deleting a resource, it only needs to modify the corresponding memory address, so as to inquire and use resources more intuitively and realize more convenient operation [10]. The realization block diagram of the test questions management function is shown in figure 6.…”
Section: Realization Of System Question Bank Management Program (mentioning, confidence: 99%)