2017
DOI: 10.1145/3127590

Mimebot—Investigating the Expressibility of Non-Verbal Communication Across Agent Embodiments

Abstract: Unlike their human counterparts, artificial agents such as robots and game characters may be deployed with a large variety of face and body configurations. Some have articulated bodies but lack facial features, and others may be talking heads ending at the neck. Generally, they have many fewer degrees of freedom than humans through which they must express themselves, and there will inevitably be a filtering effect when mapping human motion onto the agent. In this paper, we investigate filtering effects on three types …
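The "filtering effect" the abstract describes can be pictured as projecting a full human pose onto whichever degrees of freedom the target embodiment actually exposes. The sketch below is illustrative only and is not taken from the paper; the channel names and the Embodiment class are hypothetical.

```python
# Illustrative sketch (not from the paper): mapping a human pose onto an
# agent embodiment that actuates only a subset of the human's channels.
# All channel names and the Embodiment class are hypothetical.

HUMAN_CHANNELS = [
    "head_pitch", "head_yaw", "brow_raise", "jaw_open",   # face/head
    "spine_bend", "l_shoulder", "r_shoulder", "l_elbow",  # body
]

class Embodiment:
    """An agent defined by the subset of channels it can actuate."""
    def __init__(self, name, channels):
        self.name = name
        self.channels = set(channels)

    def retarget(self, human_pose):
        """Keep only the channels this agent can express; the rest of
        the human motion is filtered out (lost) in the mapping."""
        return {ch: v for ch, v in human_pose.items() if ch in self.channels}

# A "talking head" keeps face/head motion but drops body motion,
# while a faceless robot does the opposite.
talking_head = Embodiment("talking head",
                          ["head_pitch", "head_yaw", "brow_raise", "jaw_open"])
faceless_robot = Embodiment("faceless robot",
                            ["head_pitch", "head_yaw", "spine_bend",
                             "l_shoulder", "r_shoulder", "l_elbow"])

pose = {ch: 0.5 for ch in HUMAN_CHANNELS}  # dummy human pose
print(talking_head.retarget(pose))    # facial channels survive
print(faceless_robot.retarget(pose))  # body channels survive
```

Under this simple view, each embodiment discards a different part of the same source performance, which is exactly what makes the expressibility comparison across embodiments non-trivial.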

Cited by 6 publications (5 citation statements) | References 24 publications
“…Apart from the speaker identity, the emotional or affective state of a speaker also impacts the gestures performed by them. A striking example of this is the large range of expressive motion variation with the same lexical message explored in the Mimebot data [AONB17]. Building emotionally aware embodied agents is a common research direction [CBFV16, SZGK18].…”
Section: Key Challenges of Gesture Generation (mentioning)
confidence: 99%
“…The frustrated sound file was 9 seconds long, the relaxed sound file was 6.5 seconds, and the two joyful sound files were 10 and 9 seconds, respectively. The two joyful sounds were produced by two gestures that differed in terms of variation of the non-verbal expression along a joyful axis, as described in [1]. Two versions were included for comparative purposes.…”
Section: Stimuli (mentioning)
confidence: 99%
“…Two versions were included for comparative purposes. The gesture that produced the first sound file was rated as more joyful than the gesture producing the second sound (see [1]).…”
Section: Stimuli (mentioning)
confidence: 99%
“…We also demonstrate how our method is used together with data-driven marker reconstruction to provide high-quality data from sparse marker sets. For this experiment, we used a subset of the data from a previous study on expressive artificial agents captured in our motion capture lab [26]. The data consist of an 8.4 min (60,315 frames) long motion capture clip of an actor giving instructions to an interlocutor, while varying the displayed level of engagement from very un-engaged to very engaged, as well as two shorter clips of RoM data (one for the hands/fingers and one for the face).…”
Section: Experiment 3: Performance Capture (mentioning)
confidence: 99%
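As a quick consistency check on the quoted figures (this computation is ours, not from the cited study), the clip length and frame count imply the capture frame rate:

```python
# 60,315 frames over 8.4 minutes of capture, per the quoted statement.
frames = 60_315
minutes = 8.4
fps = frames / (minutes * 60)  # 60315 / 504 ≈ 119.67
print(f"{fps:.2f} fps")        # ≈ 120 fps, a standard mocap rate
```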