2011
DOI: 10.1109/tasl.2010.2090148

Time Series Models for Semantic Music Annotation

Cited by 44 publications (31 citation statements) · References 21 publications
“…In the first stage, such algorithms infer higher-level information from content features, such as term weight vector representations. These representations are then fed into a machine learning algorithm to learn semantic labels [9,17]. For instance, Miotto et al. [17] first model semantic multinomials over tags based on music content features.…”
Section: Related Work
confidence: 99%
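
As a rough illustration of the two-stage idea described in this citation (a hypothetical simplification, not Miotto et al.'s actual model), per-tag affinity scores derived from content features can be normalized into a semantic multinomial, which then serves as the feature vector for a second-stage classifier. The function and variable names below are assumptions for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def semantic_multinomial(tag_scores):
    """Normalize non-negative per-tag affinity scores into a multinomial,
    i.e. a probability distribution over the tag vocabulary."""
    tag_scores = np.maximum(tag_scores, 0) + 1e-12
    return tag_scores / tag_scores.sum(axis=-1, keepdims=True)

def train_second_stage(raw_scores, labels):
    """Stage 1 (assumed, left abstract here): a content-based model produces
    `raw_scores` of shape (n_tracks, n_tags).
    Stage 2: the semantic multinomials become inputs to a classifier trained
    to predict a semantic label."""
    X = semantic_multinomial(raw_scores)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf
```
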
“…Although quite a few MIR researchers suggest such a combination [2,1,3,12,5], a systematic evaluation of combining state-of-the-art audio and web similarity estimators is still missing, hence provided here.…”
Section: Hybrid Music Retrieval
confidence: 99%
“…The protocol defines training and evaluation sets, which consist of 729 audio files each. The experiments on music tagging were conducted following the experimental procedure defined in [26]. That is, 78 tags, which have been employed to annotate at least 50 music recordings in the CAL500 dataset, were used in the experiments by applying fivefold cross-validation.…”
Section: Datasets and Evaluation Procedures
confidence: 99%
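
A minimal sketch of the evaluation setup this citation describes (assuming a binary track-by-tag annotation matrix for CAL500; the function and variable names are hypothetical, not part of the cited protocol): tags used to annotate at least 50 recordings are retained and the tracks are split for five-fold cross-validation.

```python
import numpy as np
from sklearn.model_selection import KFold

def filter_and_split(annotations, min_count=50, n_folds=5, seed=0):
    """annotations: (n_tracks, n_tags) binary matrix (hypothetical input).
    Keep only tags used to annotate at least `min_count` recordings
    (78 tags survive this filter on CAL500 per the cited protocol),
    then generate five-fold cross-validation splits over tracks."""
    keep = annotations.sum(axis=0) >= min_count
    y = annotations[:, keep]
    folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    return y, list(folds.split(y))
```
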
“…That is, F1 = 2 · (precision · recall) / (precision + recall) yields a scalar measure of overall annotation performance. If a tag is never selected for annotation, then following [1,26], the corresponding precision (that otherwise would be undefined) is set to the tag prior on the training set, which equals the performance of a random classifier. In the music tagging experiments, the length of the class indicator vector returned by the LRSMs as well as the MLSRC, the Rank-SVMs, the MLkNN, and the PARAFAC2-based autotagging method was set to 10 as in [1,26].…”
Section: Datasets and Evaluation Procedures
confidence: 99%
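
To illustrate this evaluation rule concretely (a sketch under assumed array shapes, not the authors' code), the following computes per-tag precision, recall, and F1 for fixed-length annotations of 10 tags per track, falling back to the tag's training-set prior when a tag is never selected; all names below are hypothetical.

```python
import numpy as np

def mean_tag_f1(scores, y_true, train_priors, annotation_length=10):
    """scores       : (n_tracks, n_tags) affinity scores from an autotagger
    y_true       : (n_tracks, n_tags) binary ground-truth annotations
    train_priors : (n_tags,) empirical tag frequencies on the training set"""
    n_tracks, n_tags = scores.shape
    y_pred = np.zeros_like(y_true)
    # Annotate each track with its top-scoring tags (length 10 as in [1,26]).
    for i in range(n_tracks):
        top = np.argsort(scores[i])[::-1][:annotation_length]
        y_pred[i, top] = 1

    f1 = np.zeros(n_tags)
    for t in range(n_tags):
        tp = np.sum(y_pred[:, t] * y_true[:, t])
        selected = y_pred[:, t].sum()
        relevant = y_true[:, t].sum()
        if selected == 0:
            # Tag never selected: precision falls back to the training-set
            # prior, i.e. the performance of a random classifier.
            precision = train_priors[t]
        else:
            precision = tp / selected
        recall = tp / relevant if relevant > 0 else 0.0
        if precision + recall > 0:
            f1[t] = 2 * precision * recall / (precision + recall)
    return f1.mean()
```
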