2022
DOI: 10.1109/taffc.2020.3028109

A Multi-Componential Approach to Emotion Recognition and the Effect of Personality

Abstract: Emotions are an inseparable part of human nature, affecting our behavior in response to the outside world. Although most empirical studies have been dominated by two theoretical models, discrete categories of emotion and dichotomous dimensions, results from neuroscience approaches suggest a multi-process mechanism underpinning emotional experience, with a large overlap across different emotions. While these findings are consistent with the influential theories of emotion in psychology that emphasize a…

Cited by 24 publications (18 citation statements). References 65 publications.
“…To assess the robustness and efficiency of the proposed architecture from a variety of angles, we tested the model on three datasets and conducted a large number of ablation analyses, thereby verifying the influence of parameter variables on the predictions. In comparison with earlier studies, we provided a more potent state-of-the-art end-to-end model for SER, whose adaptability will encourage the future development of multi-modal speech emotion recognition, i.e., by taking advantage of other modalities such as video and text [68,69,70]. Besides, we will also consider how to use chunk-level segment features to create a self-supervised learning framework [71], such as masking some chunk segments during the feature input process and performing a contrastive loss on the model output, as shown for wav2vec 2.0 [72,73].…”
Section: Discussion (mentioning)
confidence: 99%
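The masked-chunk objective described in the excerpt above can be made concrete. Below is a minimal sketch, in the spirit of wav2vec 2.0, of masking chunk-level segment features and scoring each masked position with an InfoNCE-style contrastive loss; the encoder, mask_prob, temperature, and all tensor shapes are illustrative assumptions, not the cited paper's implementation.

```python
# Minimal sketch (assumptions throughout): mask a fraction of chunk-level
# segment features and train the model to identify the true chunk for each
# masked position via an InfoNCE-style contrastive loss, as in wav2vec 2.0.
import torch
import torch.nn.functional as F

def masked_chunk_contrastive_loss(chunks, encoder, mask_prob=0.15, temperature=0.1):
    """chunks: (batch, n_chunks, dim) chunk-level segment features.
    encoder: any context network mapping (batch, n_chunks, dim) -> same shape."""
    batch, n_chunks, _ = chunks.shape
    mask = torch.rand(batch, n_chunks) < mask_prob   # positions to mask
    masked = chunks.clone()
    masked[mask] = 0.0                               # zero out masked chunks
    context = encoder(masked)                        # contextualized outputs

    loss, n_terms = chunks.new_zeros(()), 0
    for b in range(batch):
        idx = mask[b].nonzero(as_tuple=True)[0]      # masked positions in item b
        if idx.numel() == 0:
            continue
        # Similarity of each masked context vector to every true chunk of the
        # same utterance; the other chunks act as negatives.
        logits = F.cosine_similarity(
            context[b, idx].unsqueeze(1),            # (m, 1, dim)
            chunks[b].unsqueeze(0),                  # (1, n_chunks, dim)
            dim=-1,
        ) / temperature                              # (m, n_chunks)
        loss = loss + F.cross_entropy(logits, idx)   # target = true position
        n_terms += 1
    return loss / max(n_terms, 1)
```

Here encoder could be, for instance, a small Transformer over the chunk axis, and this loss would be combined with the supervised SER objective during pre-training or joint training.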
“…Film clips provide naturalistic events due to their dynamic nature while being easily implemented in the laboratory [19]. They can induce powerful emotions theorized by both discrete [18,19] and dimensional [12,22] models, including complex emotions such as tenderness or compassion [19] beyond basic categories such as fear or disgust. Their effectiveness for studying componential appraisal models [18,43] is also supported by their impact on physiological bodily states such as heart function [13,43], skin conductance [13,18], and brain activity [12,13,22,23]. Our study embraced a wide range of discrete theory-based emotions that were induced using film clips to assess the corresponding CPM descriptors.…”
Section: Materials Selection and Assessment (mentioning)
confidence: 99%
“…Although additional expressor-related facial features may improve prediction performance, they cannot explain differences across individuals (i.e., the variance above the noise ceiling). Therefore, to explain this substantial amount of variance, we must turn to perceiver-related features, which may themselves be multiple, including the age, gender, sex, personality, and culture of the perceiver, all of which have been shown to influence the interpretation of facial expressions of emotion (47–50). Our study showed that the perceiver’s culture explains part of this variance and that our culture-aware models removed the initial bias towards WE cultures.…”
Section: Creating More Granular Facial Expression Models (mentioning)
confidence: 99%
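The "noise ceiling" reasoning in the excerpt above has a simple operational form: the ceiling on what any stimulus-driven (expressor-feature) model can explain is set by how consistently different perceivers rate the same faces, and systematic individual differences lie above it by construction. A minimal sketch with synthetic data follows; the split-half estimator, Spearman-Brown correction, and all numbers are illustrative assumptions, not the cited study's procedure.

```python
# Hypothetical sketch of a noise-ceiling estimate: the average split-half
# correlation of mean ratings bounds the variance any expressor-feature
# model can explain. Data and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.normal(size=(40, 100))        # 40 hypothetical perceivers x 100 faces

def noise_ceiling(ratings, n_splits=200):
    """Average split-half correlation of mean ratings, Spearman-Brown corrected."""
    n = ratings.shape[0]
    rs = []
    for _ in range(n_splits):
        perm = rng.permutation(n)            # random split of perceivers
        a = ratings[perm[: n // 2]].mean(axis=0)
        b = ratings[perm[n // 2 :]].mean(axis=0)
        r = np.corrcoef(a, b)[0, 1]          # agreement between halves
        rs.append(2 * r / (1 + r))           # Spearman-Brown correction
    return float(np.mean(rs))

print(f"estimated noise ceiling r = {noise_ceiling(ratings):.2f}")
```

On real data, a model built only from expressor features can at best reach this ceiling; whatever culture- or personality-linked variance remains above it is what perceiver-aware models, like those discussed in the excerpt, are meant to capture.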