Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia 2013
DOI: 10.1145/2506364.2506365
1000 songs for emotional analysis of music

Abstract: Music is composed to be emotionally expressive, and emotional associations provide an especially natural domain for indexing and recommendation in today's vast digital music libraries. But such libraries require powerful automated tools, and the development of systems for automatic prediction of musical emotion presents a myriad of challenges. The perceptual nature of musical emotion necessitates the collection of data from human subjects. The interpretation of emotion varies between listeners, thus each clip need…

Cited by 143 publications (79 citation statements). References 16 publications.
“…The MediaEval 2013 data, also known as the "1000 songs" dataset [29], is a set created for machine-learning benchmarking. It consists of 744 unique song excerpts (after duplicate removal).…”
Section: Complexity
Mentioning confidence: 99%
“…This module implements a method for identifying and extracting (arousal, valence) pairs from a musical track. The emotion-recognition capabilities have been implemented with Support Vector Machines (SVMs), trained on the dataset described in [18]. It includes approximately 1000 CC-licensed songs that were listened to and subsequently annotated with arousal and valence values through crowdsourcing.…”
Section: Use Case
Mentioning confidence: 99%
“…It includes approximately 1000 CC-licensed songs that were listened to and subsequently annotated with arousal and valence values through crowdsourcing. The training process uses 34 musical features extracted from each audio track of the aforementioned dataset as the training-set inputs (see [18] for more details); the corresponding arousal and valence annotations serve as the outputs.…”
Section: Use Case
Mentioning confidence: 99%
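The SVM setup quoted above (34 audio features in, an arousal/valence pair out) can be sketched as follows. This is a minimal illustration, not the cited module's actual code: it uses scikit-learn's `SVR` and synthetic feature vectors in place of the real 1000-songs features and annotations, keeping only the 34-feature dimensionality stated in the quote.

```python
# Hedged sketch of SVM-based arousal/valence regression. Assumptions:
# scikit-learn's SVR stands in for the cited SVM implementation, and the
# training data is synthetic rather than the actual 1000-songs dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_songs, n_features = 200, 34          # 34 features per track, per the quote

# Synthetic stand-in for extracted audio features and crowdsourced labels.
X = rng.normal(size=(n_songs, n_features))
arousal = X @ rng.normal(size=n_features) + rng.normal(scale=0.1, size=n_songs)
valence = X @ rng.normal(size=n_features) + rng.normal(scale=0.1, size=n_songs)

# One regressor per emotion dimension, as in dimensional emotion models.
arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf")).fit(X, arousal)
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf")).fit(X, valence)

def predict_emotion(features):
    """Return an (arousal, valence) pair for one 34-dim feature vector."""
    f = np.atleast_2d(features)
    return float(arousal_model.predict(f)[0]), float(valence_model.predict(f)[0])

a, v = predict_emotion(X[0])
```

Training separate regressors per dimension is the simplest way to emit (arousal, valence) pairs; a multi-output model would be an equally valid design choice.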
“…The experiment was designed with a fixed duration of six minutes and 45 seconds. Six 45-second songs, characterized by the emotional reaction they elicit, were selected from a free music database [11]. This duration was chosen to suit an average speed of 35 km/h and to keep the test short enough that fatigue would not affect the results.…”
Section: ICTTE 2017
Mentioning confidence: 99%