2017
DOI: 10.1371/journal.pone.0173392
Developing a benchmark for emotional analysis of music

Abstract: The field of music emotion recognition (MER) has expanded rapidly in the last decade, and many new methods and audio features have been developed to improve the performance of MER algorithms. However, comparing the performance of new methods is difficult because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM)…

Cited by 182 publications (117 citation statements)
References 40 publications
“…The results in Section 3.2, though only indicative, pose some research questions that can form the basis of future work. The inconsistency in rating may be due to the short duration of the pieces: the authors of [21] ignore annotations of the first five seconds of music. With intra-rāga variation [8], the clip duration should be long enough for the emotion to be perceived, but short enough to evoke a single emotion.…”
Section: Discussion
Citation type: mentioning
Confidence: 99%
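The five-second trimming mentioned in this excerpt is straightforward to reproduce. Below is a minimal sketch, assuming a clip's dynamic annotations sit in a pandas DataFrame with a hypothetical "time_s" column in seconds; the actual annotation files use their own column naming:

```python
# Sketch: drop annotations from the first five seconds of a clip,
# as done by the authors of [21], before computing per-clip statistics.
# "time_s" is an illustrative column name, not the dataset's schema.
import pandas as pd

def trim_warmup(annotations: pd.DataFrame, warmup_s: float = 5.0) -> pd.DataFrame:
    """Discard annotations recorded during the first `warmup_s` seconds."""
    return annotations[annotations["time_s"] >= warmup_s].reset_index(drop=True)
```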
“…The annotators were paid $8 per hour to rate valence and arousal separately via the crowdsourcing platform Amazon Mechanical Turk (MTurk); the annotators' backgrounds were not considered. Each song was annotated by five to ten people, and we used the average of the annotations [37][38][39].…”
Section: Dataset
Citation type: mentioning
Confidence: 99%
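Averaging the five to ten per-annotator ratings into a single valence and arousal value per song, as the excerpt describes, reduces to a group-by mean. A minimal sketch, with purely illustrative file and column names ("annotations.csv", "song_id", "valence", "arousal"):

```python
# Sketch: aggregate per-annotator valence/arousal ratings into one
# value per song by averaging. File and column names are illustrative.
import pandas as pd

ratings = pd.read_csv("annotations.csv")  # one row per (song_id, annotator)
song_means = (
    ratings.groupby("song_id")[["valence", "arousal"]]
    .mean()
    .reset_index()
)
print(song_means.head())
```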
“…MediaEval is a community-driven benchmarking campaign dedicated to evaluating algorithms for social and human-centered multimedia access and retrieval [63]. Unlike MIREX, the "Emotion in Music" task focused on dynamic emotion recognition in music, tracking arousal and valence over time [6,115]. The data from the MediaEval tasks were compiled into the MediaEval Database for Emotional Analysis in Music (DEAM), which is the largest available dataset with dynamic annotations, at 2 Hz, of valence and arousal for 1,802 songs and song excerpts licensed under a Creative Commons license.…”
Section: Datasets for AC of Music
Citation type: mentioning
Confidence: 99%
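For readers who want to work with DEAM's 2 Hz dynamic annotations, the sketch below converts a wide per-song table into a long (song, time, value) format. The file name "arousal.csv" and the "sample_15000ms"-style column naming are assumptions about the release layout; check the downloaded files against them:

```python
# Sketch: reshape DEAM-style dynamic annotations from wide to long form.
# Assumes one row per song and one column per 2 Hz sample, named like
# "sample_15000ms"; this layout is an assumption, not a guarantee.
import pandas as pd

wide = pd.read_csv("arousal.csv")  # hypothetical filename
long = wide.melt(id_vars="song_id", var_name="sample", value_name="arousal")
# Parse "sample_15000ms" -> 15.0 seconds; 2 Hz sampling means 0.5 s steps.
long["time_s"] = long["sample"].str.extract(r"(\d+)", expand=False).astype(float) / 1000.0
long = long.sort_values(["song_id", "time_s"]).dropna()
```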
“…Mood clusters [32], dimensional representations such as arousal, tension, and valence, as well as music-specific emotion representations can be used. An analysis of the submissions to the MediaEval "Emotion in Music" task revealed that the use of deep learning accounted for superior emotion recognition performance far more than the choice of features [6]. Recent methods for emotion recognition in music rely on deep learning and often use spectrogram features that are converted to images [5].…”
Section: Affective Computing of Music
Citation type: mentioning
Confidence: 99%
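The spectrogram-as-image pipeline described in this excerpt can be illustrated with librosa and PyTorch. This is a generic sketch, not the architecture from [5] or [6]; the network shape and hyperparameters are placeholders:

```python
# Sketch: treat a log-mel spectrogram as a single-channel image and
# regress (valence, arousal) with a small CNN. Illustrative only.
import librosa
import numpy as np
import torch
import torch.nn as nn

def mel_image(path: str, sr: int = 22050, n_mels: int = 128) -> torch.Tensor:
    """Load audio and return its log-mel spectrogram as a (1, H, W) tensor."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return torch.from_numpy(log_mel).float().unsqueeze(0)

class TinyEmotionCNN(nn.Module):
    """Placeholder CNN: one conv block, pooled to a fixed-size grid."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.head = nn.Linear(16 * 8 * 8, 2)  # -> (valence, arousal)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Usage: stack images into a batch; output has shape (batch, 2).
# preds = TinyEmotionCNN()(mel_image("clip.wav").unsqueeze(0))
```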