2023
DOI: 10.1109/taffc.2021.3093787

Examining Emotion Perception Agreement in Live Music Performance

Abstract: Current music emotion recognition (MER) systems rely on emotion data averaged across listeners and over time to infer the emotion expressed by a musical piece, often neglecting time- and listener-dependent factors. These limitations can restrict the efficacy of MER systems and cause misjudgements. We present two exploratory studies on music emotion perception. First, in a live music concert setting, fifteen audience members annotated perceived emotion in the valence-arousal space over time using a mobile applic…
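
The averaging the abstract criticizes is easy to make concrete. The sketch below is hypothetical code, not from the paper: the array shape (15 listeners, 120 timesteps) and the [-1, 1] value range are assumptions chosen to mirror the study's setup. It contrasts the single listener- and time-averaged valence-arousal label that conventional MER systems target with a per-timepoint spread that exposes where listeners disagree.

```python
# Hypothetical sketch (not the paper's code): contrast the conventional
# MER target -- one valence-arousal label averaged over listeners and
# time -- with a per-timepoint spread that exposes listener disagreement.
import numpy as np

rng = np.random.default_rng(0)
# annotations[listener, timestep, dim]: dim 0 = valence, dim 1 = arousal,
# both assumed to lie in [-1, 1] as in a typical valence-arousal interface.
annotations = np.clip(rng.normal(0.2, 0.4, size=(15, 120, 2)), -1.0, 1.0)

# Conventional target: average over listeners (axis 0) and time (axis 1).
global_label = annotations.mean(axis=(0, 1))      # shape (2,)

# Time-resolved view: mean and across-listener spread at each timestep.
per_time_mean = annotations.mean(axis=0)          # shape (120, 2)
per_time_spread = annotations.std(axis=0)         # shape (120, 2)

print("single averaged label (valence, arousal):", global_label)
print("max across-listener spread per dimension:", per_time_spread.max(axis=0))
```

Timesteps where the spread is large are exactly the points a single averaged label misrepresents, which is the agreement question the paper examines.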

Cited by 15 publications (10 citation statements) | References 95 publications
“…This task seeks to narrow the so-called semantic gap between high-level musical constructions/concepts and low-level, handcrafted representations of sound, centering it around the processing of musical content (e.g., acoustic features and lyrics). However, more recent studies have argued that user-aware music retrieval systems should include user-related factors [2-5]: user context (fluctuating characteristics from the listener) and user properties (features from the listener which are more constant). In short, user context involves collecting data regarding listening mood, uses of music, or physiological signals, while user properties include demographics, musical experience, or preference.…”
Section: Trompa-mer: An Open Dataset for Personalized Music Emotion Recognition (mentioning)
confidence: 99%
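
The context/properties split this statement draws can be made concrete as a data structure. A minimal sketch follows; every field name here is hypothetical and does not come from any specific dataset schema (Python 3.10+ assumed for the union syntax).

```python
# Illustrative sketch only: one way to model the two kinds of user-related
# factors the citing papers distinguish. All field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserProperties:
    """Relatively constant listener features."""
    age: int
    musical_experience_years: float
    preferred_genres: list[str] = field(default_factory=list)

@dataclass
class UserContext:
    """Fluctuating, session-dependent listener state."""
    listening_mood: str                  # e.g., a self-reported mood label
    activity: str                        # e.g., "commuting", "studying"
    heart_rate_bpm: float | None = None  # optional physiological signal

@dataclass
class ListeningEvent:
    """Pairs one track with both factor types for a user-aware system."""
    track_id: str
    properties: UserProperties
    context: UserContext
```

The design point is simply that context must be re-collected per session while properties can be stored once per listener, which is why the literature treats them separately.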
“…MER has been subject to extensive criticism given the ambiguous and subjective nature of emotions in music (see [6, 15-18]). Namely, different listeners are likely to provide diverse emotional judgments due to several factors: (1) intrinsic constructions of music (e.g., lyrics content and style), (2) sociocultural conventions (e.g., functionality of music), (3) personal differences (e.g., listener's mood, preferences, personality, and musical experience), (4) high-level emotional evaluation mechanisms (e.g., language differences, aesthetic experience, familiarity, episodic memory, and identity confirmation), and (5) generalized confusion between the concepts of induced and perceived emotions in music. We stress the distinction between these concepts: perceived emotions are recognized by the listener through judgment/interpretation of musical properties (e.g., Western happy music is typically in major mode and has a fast tempo); induced emotions are felt by the listener and involve psychophysiological responses to music (e.g., happy music might induce sadness when triggering a nostalgic memory).…”
Section: Related Work (mentioning)
confidence: 99%
“…This task seeks to narrow the so-called semantic gap between high-level musical constructions/concepts and low-level, handcrafted representations of sound, centering it around the processing of musical content (e.g., acoustic features and lyrics). However, more recent studies have argued that user-aware music retrieval systems should include user-related factors (Barthet et al., 2013; Schedl et al., 2013; Zangerle et al., 2021; Yang et al., 2021): user context (fluctuating characteristics from the listener) and user properties (features from the listener which are more constant). In short, user context involves collecting data regarding listening mood, uses of music, or physiological signals, while user properties include demographics, musical experience, or preference.…”
Section: Introduction (mentioning)
confidence: 99%
“…Major-minor tonality has long been present in Western Classical Music, providing means for shaping the mood of musical compositions. Nowadays, it plays an important role in many music-related algorithms or systems, for example, music genre recognition [1-3], assessment of musical tension [4-7], music visualization systems [8-11], music data mining [6], computer-aided composition software [12-14], and determining the harmonic structure of created pieces [15,16]. It is worth mentioning that in recent years, methods implementing neural networks [17-19] and other machine learning approaches [20-22] have gained popularity in many areas of music analysis.…”
Section: Introduction (mentioning)
confidence: 99%
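
For readers unfamiliar with how the tonality estimation this statement refers to is typically done, here is a minimal sketch of one standard template-matching approach: correlating an averaged chroma vector against the Krumhansl-Kessler major and minor key profiles over all 12 rotations. This is a generic illustration, not the cited papers' method; librosa is assumed to be available and "audio.wav" is a placeholder path.

```python
# Sketch of major/minor key estimation via Krumhansl-Kessler template
# correlation. Not from the cited works; shown only to illustrate the
# kind of algorithm the passage mentions.
import numpy as np
import librosa

# Krumhansl-Kessler probe-tone profiles, indexed from pitch class C.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

y, sr = librosa.load("audio.wav")                      # placeholder file
chroma = librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)

# Correlate the averaged chroma with each profile rotated to every tonic.
best = max(
    ((np.corrcoef(np.roll(profile, k), chroma)[0, 1], k, mode)
     for k in range(12)
     for profile, mode in ((MAJOR, "major"), (MINOR, "minor"))),
    key=lambda t: t[0],
)
tonic = librosa.midi_to_note(60 + best[1], octave=False)
print(f"estimated key: {tonic} {best[2]} (r = {best[0]:.2f})")
```

The highest-correlating rotation gives both the tonic and the mode; neural approaches like those cited in [17-19] replace the fixed profiles with learned representations but solve the same classification problem.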