2005
DOI: 10.1007/11550617_34
Levels of Representation in the Annotation of Emotion for the Specification of Expressivity in ECAs

Abstract: In this paper we present a two-step approach towards the creation of affective Embodied Conversational Agents (ECAs): annotation of a real-life, non-acted emotional corpus and animation by copy-synthesis. The basis of our approach is to study how coders perceive and annotate, at several levels, the emotions observed in a corpus of emotionally rich TV video interviews. We use their annotations to specify the expressive behavior of an agent at several levels. We explain how such an approach can be useful for provi…

Cited by 19 publications (15 citation statements) | References 24 publications
“…In future developments we intend to aggregate more information about these frameworks and to create a system which permits to convert video recording from a speaker to MPEG-4 FBA specification to achieve a more complete evaluation, by comparing real footage with synthetic generated one. Similar work was completed for Greta [30].…”
Section: Discussion
confidence: 99%
“…We have conducted a study using EmoTV, a corpus of real data made of video clips from French TV news (Martin et al. 2006). The people interviewed in the video clips showed complex emotions, which might arise from the evaluation of the same event from different perspectives (Scherer 2000; Devillers et al. 2005). Emotion labels, behaviour descriptions and expressivity dimensions were annotated.…”
Section: Gesture Expressivity in Naturalistic Data
confidence: 99%
“…The "multiple levels replay" approach involves the level of annotation of emotions, and the low-level annotations of multimodal behaviors (such as the gesture expressivity for assigning values to the expressivity parameters of the ECA, and the manual annotation of facial expressions) [25]. The "facial blending replay" approach is identical to the "multiple levels replay" approach except for facial expressions: it uses a computational model for generating facial expressions of blend of emotions [25]. More details are provided below on how these two approaches have been used in our perceptual study.…”
Section: Annotating and Replaying Multimodal Emotional Behaviors
confidence: 99%