Emotion Recognition 2015
DOI: 10.1002/9781118910566.ch15
Building a Robust System for Multimodal Emotion Recognition

Cited by 9 publications (3 citation statements) · References 53 publications
“…In the future, we will investigate how to speed up the input of emotional states in order to enable users to produce prosodic speech in daily environments. One idea would be to exploit other modalities, such as facial expressions, to determine the user's emotional state based on our previous work on automated emotion recognition (Wagner et al 2015) and enhance the user's emotional expression by prosodic speech. Furthermore, it would be desirable to offer speech output to the participants that conveys not only emotions in a convincing manner, but also matches their personality.…”
Section: Results
confidence: 99%
“…Additionally, interaction technology has to be used with caution in cars so as not to distract the driver. However, we believe that sensing technologies for automatically recognizing stress, and classifying emotions and activities (e.g., [10,12,20]) could improve a traffic companion's impact on wellbeing. Addressing other Positive Computing factors could also help to increase wellbeing.…”
Section: Discussion
confidence: 99%
“…The classifiers are trained following [29], which allows training AU classifiers using datasets with a reduced amount of ground truth (only prototypical facial expressions are needed). Extraction of paralinguistic affective cues is done following [33]. Extracted facial and paralinguistic cues are combined through fusion strategies in order to generate a final prediction.…”
Section: Multimodal Communication Analysis
confidence: 99%