2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9207212
Regression-based Music Emotion Prediction using Triplet Neural Networks

Cited by 19 publications (9 citation statements)
References 22 publications
“…In many affective computing studies [30, 57–62], Russell's Circumplex model of affect [28] is applied to represent human affective responses. According to this model, we can map human emotions into a space, which includes valence and arousal dimensions.…”
Section: Emotion Representation Models
confidence: 99%
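The valence-arousal space mentioned in this citation statement can be sketched as follows; the coordinates and emotion labels here are hypothetical illustrations, not values from the cited studies.

```python
# Hypothetical anchor points in Russell's valence-arousal space,
# each coordinate in [-1, 1].
emotions = {
    "happy": (0.8, 0.5),
    "angry": (-0.7, 0.7),
    "sad": (-0.6, -0.4),
    "calm": (0.5, -0.6),
}

def nearest_emotion(valence, arousal):
    """Return the labelled emotion closest to a predicted (valence, arousal) point."""
    return min(
        emotions,
        key=lambda e: (emotions[e][0] - valence) ** 2 + (emotions[e][1] - arousal) ** 2,
    )

print(nearest_emotion(0.7, 0.4))  # → happy (high valence, high arousal)
```

A regression model that predicts continuous valence and arousal values can be mapped back to a discrete label this way, which is one reason the dimensional representation is popular for music emotion prediction.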
“…In addition to neural network architectures such as SoundNet [88], VGGish, Inception, ResNet and AlexNet [35], we can use toolkits such as OpenSMILE [36] and YAAFE [89] for audio feature extraction. Audio features obtained using the OpenSMILE toolkit are proven to be effective for the emotion prediction task [18,21,61]. In [90], models using VGGish-extracted features outperform those that use features extracted by applying SoundNet and the OpenSMILE toolkit.…”
Section: Audio Modality
confidence: 99%
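Frame-level audio feature extraction of the kind these toolkits perform can be sketched with plain NumPy; the two features below (RMS energy and spectral centroid) are illustrative stand-ins, not the OpenSMILE or VGGish feature sets.

```python
import numpy as np

def frame_features(signal, sr=22050, frame_len=2048, hop=512):
    """Per-frame RMS energy and spectral centroid of a mono signal."""
    window = np.hanning(frame_len)  # taper each frame to limit spectral leakage
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        spectrum = np.abs(np.fft.rfft(frame * window))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        feats.append((rms, centroid))
    return np.array(feats)

# Usage: one second of a 440 Hz sine; the spectral centroid of each frame
# should sit near 440 Hz, and the RMS near 1/sqrt(2).
sr = 22050
t = np.arange(sr) / sr
feats = frame_features(np.sin(2 * np.pi * 440 * t), sr)
```

Real pipelines stack many such low-level descriptors per frame and then summarize them over a clip (mean, variance, etc.) before feeding a regressor, which is roughly what the OpenSMILE feature sets automate.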
“…In music performance, the on-site atmosphere is shaped mainly by lighting, which is changed along with the emotional factors expressed in the music to help create a good stage effect. In this context, multimodal music emotion recognition is very important for better control of the lighting [13][14][15]. Therefore, a classification and recognition model for multimodal music emotion is constructed to support intelligent recognition and classification of multimodal music emotion in a music performance system.…”
Section: Multimodal Music Emotion Recognition and Classification Based on Image Sequence
confidence: 99%
“…Human-in-the-loop: To evaluate and measure a model's performance, we typically use ground-truth labels. For example, if we are doing emotion prediction from music, we need experts to label some music with emotions (Cheuk, Luo, Balamurali, Roig, & Herremans, 2020). In a traditional context, we use supervised learning with a training/test split.…”
Section: Technologies
confidence: 99%
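The supervised setup this passage describes can be sketched minimally: synthetic features and "expert" labels stand in for real annotated music, and ordinary least squares stands in for the model, so every name and number below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                  # 200 clips, 8 audio features each
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=200)    # "expert" valence labels (synthetic)

# Train/test split: first 150 clips for training, last 50 held out
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Fit ordinary least squares on the training split only
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Evaluate against ground truth on the held-out split
mse = np.mean((X_test @ w - y_test) ** 2)
```

The held-out test split is what makes the evaluation honest: the model never sees those labels during fitting, so the test MSE estimates performance on unseen music.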