2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops
DOI: 10.1109/acii.2009.5349579

Pleasure-arousal-dominance driven facial expression simulation

Abstract: Expressing and recognizing affective states through facial expressions is an important aspect of making virtual humans appear more natural and believable. Based on the results of an empirical study, a system for simulating emotional facial expressions for a virtual human has been developed. This system consists of two parts: (1) a control architecture for simulating emotional facial expressions with respect to Pleasure, Arousal, and Dominance (PAD) values, and (2) an expressive output component for animating …
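To make the idea of PAD-driven expression control concrete, the following Python snippet is a minimal sketch, not the paper's implementation: the anchor emotions, their PAD coordinates, and the inverse-distance blending scheme are illustrative assumptions only. It merely shows how a (Pleasure, Arousal, Dominance) triple could be converted into blend weights over a small set of target expressions.

```python
import math

# Hypothetical anchor expressions with assumed PAD coordinates in [-1, 1].
# These values are illustrative, not taken from the paper or its study.
ANCHORS = {
    "joy":     (0.8, 0.5, 0.3),
    "anger":   (-0.6, 0.6, 0.4),
    "sadness": (-0.6, -0.4, -0.3),
    "relaxed": (0.6, -0.5, 0.2),
}

def pad_to_expression_weights(p: float, a: float, d: float) -> dict:
    """Return normalized blend weights: anchors closer in PAD space get larger weights."""
    inv_dist = {}
    for name, (ap, aa, ad) in ANCHORS.items():
        dist = math.sqrt((p - ap) ** 2 + (a - aa) ** 2 + (d - ad) ** 2)
        inv_dist[name] = 1.0 / (dist + 1e-6)  # small epsilon avoids division by zero
    total = sum(inv_dist.values())
    return {name: w / total for name, w in inv_dist.items()}

if __name__ == "__main__":
    # Example: a mildly pleasant, calm, slightly dominant state.
    print(pad_to_expression_weights(0.4, -0.2, 0.3))
```

The weights produced this way could then drive an expressive output component (e.g., blend-shape or muscle-based animation); the actual control architecture in the paper may use a different mapping.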

Cited by 28 publications (30 citation statements). References 26 publications.
“…Since the PAD model was originally designed for human emotion evaluation, obtaining PAD values through human ratings is the most popular and recommended approach [4,5,31]. Previous engineering approaches have obtained convincing PAD values from human annotation [3] and predefined rules [26,38].…”
Section: Discussion (citation type: mentioning)
confidence: 99%
“…Despite the limited variability covered in PAD space, we propose to build a general mapping model between PAD descriptors and motion features, which can easily be extended to accommodate higher variability in the PAD continuum. Preliminary studies have shown that it is possible to predict PAD values automatically from different media, for example by the backward mapping from displayed facial emotions to PAD values [3], and the PAD-PEP-FAP framework can also be reversed to predict PAD values from facial animations.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
“…Third, the Expression of Empathy, by which the virtual human's multimodal behavior is triggered through the modulated empathic emotion. The presented empathy model is applied and evaluated in the context of a conversational agent scenario involving the virtual humans MAX [12] and EMMA [6] and a human interaction partner. Within this scenario, our model is realized for EMMA and allows her to empathize with MAX's emotions during his interaction with the human partner.…”
Section: Introduction (citation type: mentioning)
confidence: 99%