2008
DOI: 10.1007/978-3-540-89796-5_8
Toward Multi-modal Music Emotion Classification

Cited by 65 publications (38 citation statements)
References 18 publications
“…by Wang, Zhang, and Zhu (2004), hand-selection of prototypical excerpts, e.g. by Liu et al. (2006), limitation to one language, thus simplifying lyrics analysis (as done in existing work on lyrics mood analysis (Laurier, Grivolla, & Herrera, 2008; Yang, Lin, Cheng, et al. 2008)), or limitation to songs where existence of all on-line information was ensured ahead, e.g. by Laurier et al. (2008), which all could not be made in a real-life application.…”
Section: Innovations In This Work
Citation type: mentioning
Confidence: 99%
“…Classification of music with the intention to facilitate its retrieval from large collections is presented in [37]. They employ audio feature extraction to analyze the contents of musical clips, and process those features by a multimodal fusion process.…”
Section: Classification Of Music Clips According To Emotions
Citation type: mentioning
Confidence: 99%
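The statement above describes the paper's pipeline as audio feature extraction followed by a multimodal fusion process. A minimal sketch of the audio side is given below; the specific descriptors (MFCCs, spectral centroid, chroma), the 30-second excerpt length, and the librosa-based implementation are illustrative assumptions, not the cited paper's exact feature set.

```python
# Sketch: summarise a 30-second music clip as a fixed-length audio feature
# vector, the kind of content representation later passed to a fusion stage.
import numpy as np
import librosa

def clip_features(path: str) -> np.ndarray:
    """Mean and standard deviation of frame-level audio descriptors."""
    y, sr = librosa.load(path, duration=30.0, mono=True)        # 30-s excerpt
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # brightness
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)            # harmony
    frames = np.vstack([mfcc, centroid, chroma])                # (26, T)
    return np.hstack([frames.mean(axis=1), frames.std(axis=1)])
```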
“…Accordingly, this approach is adopted by numerous works [13,14,11,2,8]. Yang and Lee [13], in one of the earlier works in the field, proposed the combination of lyrics and a number of audio features in order to maximise the classification accuracy and minimise the mean error.…”
Section: Mood Classification Using Audio and Lyrics
Citation type: mentioning
Confidence: 99%
“…Nevertheless, the significantly small data corpus (145 songs with lyrics) made the work too exploratory to draw safe conclusions. In their work, Yang et al. [14] extracted a number of low-level acoustic features from a 30-second part of the song, which, together with the lyrics features produced by 3 different approaches (Uni-gram, Probabilistic Latent Semantic Analysis & Bi-gram), are combined by 3 fusion methods. Therein, songs are classified into four categories following the Russell model [15], concluding that the use of textual features offers a significant accuracy improvement for the methods examined.…”
Section: Mood Classification Using Audio and Lyrics
Citation type: mentioning
Confidence: 99%
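The two statements above describe combining audio features with lyrics features and assigning songs to the four Russell quadrants. A minimal late-fusion sketch follows; the TF-IDF uni-/bi-gram text features, the logistic-regression classifiers, and the probability-averaging fusion rule are illustrative assumptions (the cited work also evaluates PLSA-based lyrics features and other fusion schemes not reproduced here).

```python
# Sketch: late fusion of an audio feature matrix and raw lyrics for
# four-quadrant (Russell model) emotion classification.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_late_fusion(audio_X, lyrics, y):
    """Fit one classifier per modality; fusion happens at prediction time."""
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)   # uni- and bi-grams
    text_X = vec.fit_transform(lyrics)
    audio_clf = LogisticRegression(max_iter=1000).fit(audio_X, y)
    text_clf = LogisticRegression(max_iter=1000).fit(text_X, y)
    return vec, audio_clf, text_clf

def predict_late_fusion(vec, audio_clf, text_clf, audio_X, lyrics, w=0.5):
    """Average per-modality class probabilities, weight w on the audio model."""
    p_audio = audio_clf.predict_proba(audio_X)
    p_text = text_clf.predict_proba(vec.transform(lyrics))
    fused = w * p_audio + (1.0 - w) * p_text
    return audio_clf.classes_[fused.argmax(axis=1)]
```

Early fusion (concatenating audio and text features before training a single classifier) and weighted variants of the averaging rule are the other obvious fusion choices one could compare under this setup.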