2022
DOI: 10.3390/technologies10030059
Continuous Emotion Recognition for Long-Term Behavior Modeling through Recurrent Neural Networks

Abstract: One’s internal state is mainly communicated through nonverbal cues, such as facial expressions, gestures and tone of voice, which in turn shape the corresponding emotional state. Hence, emotions can be effectively used, in the long term, to form an opinion of an individual’s overall personality. The latter can be capitalized on in many human–robot interaction (HRI) scenarios, such as in the case of an assisted-living robotic platform, where a human’s mood may entail the adaptation of a robot’s actions. To that…
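As a hedged illustration of the recurrent approach the abstract describes, the sketch below runs a minimal Elman-style RNN forward pass over a sequence of per-frame feature vectors and emits a valence–arousal estimate at each time step. All dimensions, weights, and names here are illustrative assumptions, not the paper's actual architecture.

```python
import math
import random

random.seed(0)

def rnn_emotion_forward(features, W_xh, W_hh, W_hy):
    """Elman RNN: h_t = tanh(W_xh x_t + W_hh h_{t-1}); y_t = tanh(W_hy h_t)."""
    hidden = [0.0] * len(W_hh)
    outputs = []
    for x in features:
        # New hidden state from the current input and the previous hidden state.
        hidden = [
            math.tanh(sum(w * xi for w, xi in zip(W_xh[i], x)) +
                      sum(w * hi for w, hi in zip(W_hh[i], hidden)))
            for i in range(len(hidden))
        ]
        # Two outputs per frame: (valence, arousal), each squashed to [-1, 1].
        outputs.append(tuple(
            math.tanh(sum(w * hi for w, hi in zip(W_hy[j], hidden)))
            for j in range(2)
        ))
    return outputs

# Toy dimensions: 4-dim per-frame features, 3 hidden units, 2 outputs.
feat_dim, hid_dim = 4, 3
W_xh = [[random.uniform(-0.5, 0.5) for _ in range(feat_dim)] for _ in range(hid_dim)]
W_hh = [[random.uniform(-0.5, 0.5) for _ in range(hid_dim)] for _ in range(hid_dim)]
W_hy = [[random.uniform(-0.5, 0.5) for _ in range(hid_dim)] for _ in range(2)]

sequence = [[random.uniform(-1, 1) for _ in range(feat_dim)] for _ in range(5)]
trace = rnn_emotion_forward(sequence, W_xh, W_hh, W_hy)
```

Because the hidden state carries information across frames, per-step estimates can be accumulated over long horizons — the "long-term behavior modeling" the title refers to.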

Cited by 29 publications (24 citation statements)
References 52 publications
“…The experiment results on the publicly available CAER-S emotion dataset verify not only the effectiveness of each block but also the superiority of our proposed method in the field of context-aware emotion recognition. In the future, we will try to extend our approach to more datasets, including videos, and utilize emotional representations in the dimensional space [62] (e.g., Valence, Arousal, and Dominance) to evaluate emotional states from multiple perspectives. Additionally, we will integrate the proposed model with its potential applications, such as the analysis of tourist reviews with video clips, the estimation of job stress levels with visual emotional evidence, or the assessment of mental health with visual media.…”
Section: Discussion (mentioning)
confidence: 99%
“…In addition, there are three-dimensional affective models based on PAD (Pleasure, Arousal, Dominance), such as personalized affective models based on PAD (Yong & Zhiyu, 2012). There are also other types of emotional models, such as the emotional interaction model of robots in continuous space (Kansizoglou et al., 2022), the emotional model based on Gross’s cognitive reappraisal (Han, Xie & Liu, 2015), fuzzy emotional reasoning based on incremental adaptation (Zhang, Jeong & Lee, 2012), and the hierarchical autonomous emotional model (Gómez & Ríos-Insua, 2017). In addition, the release of the robots Pepper (Pandey & Gelin, 2018) from Japan and Sophia (Rocha, 2017) from Hanson Robotics in the United States has caused a sensation in the field of human-computer interaction, but the specific mechanism of emotion generation is not well understood.…”
Section: Introduction (mentioning)
confidence: 99%
“…In this study, by introducing an external knowledge graph (Yong & Zhiyu, 2012; Kansizoglou et al., 2022) as the background knowledge of robots, this article simulates the awakening of background knowledge during human communication and analyzes the emotional friendliness of participants. A human-computer interaction model based on a knowledge-graph ripple network is proposed, aiming to improve the emotional friendliness and coherence of robots during human-computer interaction.…”
Section: Introduction (mentioning)
confidence: 99%
“…The recognition rate was calculated using the hidden Markov model (HMM) in [8]. The studies in [9][10][11][12][13] use artificial neural networks (ANN), recurrent neural networks (RNN) and convolutional neural networks (CNN) to recognize speech emotion. As scholars study a growing number of models, recognition accuracy gradually improves.…”
Section: Introduction (mentioning)
confidence: 99%
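The last quote above mentions HMM-based speech emotion recognition. As a hedged sketch of that classical approach, the snippet below implements the standard forward algorithm to score an observation sequence under one HMM per emotion and picks the most likely model; the two toy models, their parameters, and the binary observation alphabet are assumptions for illustration only.

```python
def hmm_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | model) for a discrete-observation HMM."""
    n = len(start)
    # Initialize with start probabilities times first-symbol emissions.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        # Propagate through transitions, then weight by the emission of o.
        alpha = [
            sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
            for s in range(n)
        ]
    return sum(alpha)

# Two toy 2-state models over a binary observation alphabet {0, 1}:
# (start probs, transition matrix, emission matrix) per emotion.
models = {
    "calm":    ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.6, 0.4]]),
    "excited": ([0.5, 0.5], [[0.5, 0.5], [0.2, 0.8]], [[0.2, 0.8], [0.1, 0.9]]),
}

obs = [1, 1, 0, 1, 1]
best = max(models, key=lambda m: hmm_likelihood(obs, *models[m]))
```

In practice the discrete symbols would be replaced by quantized acoustic features (e.g., codebook indices over MFCC frames), but the argmax-over-models decision rule is the same.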