2020
DOI: 10.1177/0305735620928422

Effects of individual factors on perceived emotion and felt emotion of music: Based on machine learning methods

Abstract: Music emotion information is widely used in music information retrieval, music recommendation, music therapy, and so forth. In the field of music emotion recognition (MER), computer scientists extract musical features to identify musical emotions, but this method ignores listeners’ individual differences. Applying machine learning methods, this study formed relations among audio features, individual factors, and music emotions. We used audio features and individual features as inputs to predict the pe…
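The abstract's core modelling idea — using audio features and individual (listener) features jointly as inputs to predict an emotion rating — can be sketched with a toy least-squares model. Everything below (feature names, dimensions, the simulated data) is hypothetical illustration, not the paper's actual features, dataset, or learning algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 listener-excerpt pairs (hypothetical; the real study's
# features and sample are not given in this excerpt).
n = 100
audio_feats = rng.normal(size=(n, 5))   # e.g. tempo, mode, RMS energy, ...
indiv_feats = rng.normal(size=(n, 3))   # e.g. age, musical training, ...

# The modelling idea from the abstract: concatenate audio AND individual
# features into one input matrix (plus a bias column).
X = np.hstack([audio_feats, indiv_feats, np.ones((n, 1))])

# Simulated emotion ratings generated from a known linear rule plus noise,
# so the fit can be checked against ground truth.
true_w = rng.normal(size=X.shape[1])
y = X @ true_w + 0.1 * rng.normal(size=n)

# Fit by ordinary least squares and measure goodness of fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The point of the sketch is only the input construction: a model that sees both feature blocks can, in principle, account for the individual differences that audio-only MER ignores.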

Cited by 32 publications (30 citation statements)
References 59 publications
“…The items were grounded in the theoretical literature, but many alternative measurement tools are available. There is also an ongoing debate as to the difference between perceived and felt emotions (Xu et al., 2020), which should be taken into account in future research.…”
Section: Discussion
confidence: 99%
“…Therefore, Yang et al have advocated to consider the individualities in MER studies [11]. Following Yang's work, many MER studies have investigated individual differences, including testing the effects of individual factors [13] and constructing personalized MER models [97], [98].…”
Section: From Music Emotion Recognition To Music Preference Prediction
confidence: 99%
“…To ensure the stability of perceived emotions, the collected music excerpts were trimmed to 25 seconds in this study. Following the approach of [11], [13], the excerpts were then converted to a uniform format: 22,050 Hz, 16 bits, and mono channel PCM WAV. To objectively evaluate the degree of sadness of the songs, the perceived sadness rating of each excerpt was evaluated on a scale from 1 (not at all) to 5 (very much).…”
Section: Musical Stimuli
confidence: 99%
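The uniform format named in the quote above (22,050 Hz, 16-bit, mono PCM WAV, 25-second excerpts) can be checked programmatically with Python's standard-library `wave` module. The conversion tooling used by the cited studies is not specified here, so this sketch only synthesizes a test tone in that format and verifies a file against it:

```python
import math
import struct
import wave

TARGET_SR = 22050      # 22,050 Hz sample rate
TARGET_SAMPWIDTH = 2   # 16-bit (2-byte) PCM samples
TARGET_CHANNELS = 1    # mono
CLIP_SECONDS = 25      # excerpt length described in the quote

def write_test_tone(path, freq=440.0):
    """Write a 25 s mono 16-bit PCM WAV test tone in the target format."""
    n_frames = TARGET_SR * CLIP_SECONDS
    frames = bytearray()
    for i in range(n_frames):
        # Half-amplitude sine, packed as little-endian signed 16-bit.
        sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / TARGET_SR))
        frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(TARGET_CHANNELS)
        wf.setsampwidth(TARGET_SAMPWIDTH)
        wf.setframerate(TARGET_SR)
        wf.writeframes(bytes(frames))

def check_format(path):
    """Return True if the WAV file matches the uniform excerpt format."""
    with wave.open(path, "rb") as wf:
        return (wf.getframerate() == TARGET_SR
                and wf.getsampwidth() == TARGET_SAMPWIDTH
                and wf.getnchannels() == TARGET_CHANNELS
                and wf.getnframes() / wf.getframerate() == CLIP_SECONDS)
```

Standardizing sample rate, bit depth, and channel count this way keeps extracted audio features comparable across excerpts, which is presumably why the cited studies converted everything before feature extraction.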