The 34th Annual ACM Symposium on User Interface Software and Technology 2021
DOI: 10.1145/3472749.3474789
SGToolkit: An Interactive Gesture Authoring Toolkit for Embodied Conversational Agents

Abstract: Non-verbal behavior is essential for embodied agents such as social robots, virtual avatars, and digital humans. Existing behavior authoring approaches, including keyframe animation and motion capture, are too expensive to use when numerous utterances require gestures. Automatic generation methods show promising results, but their output quality is not yet satisfactory, and it is hard to modify the output as a gesture designer intends. We introduce a new gesture generation toolkit, named SGToolkit, which giv…

Cited by 17 publications (8 citation statements)
References 32 publications
“…The authors employ a probabilistic model that predicts the next pose distribution instead of predicting a fixed pose; gesture motion can then be re‐sampled repeatedly to obtain a variety of sequences. Similarly, in [YPJ*21] a gesture generation toolkit is presented with the control parameters speed, spatial extent, and handedness. The system in [SGD21] uses the Laban Effort and Shape qualities as animation modifiers to impart the intended personality to the character.…”
Section: Related Work
confidence: 99%
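The mechanism described above — predicting a distribution over the next pose rather than a fixed pose, then re-sampling to obtain varied sequences — can be sketched as follows. This is a minimal illustration, not the cited paper's model: `predict_pose_distribution` is a hypothetical stand-in for a learned network, assumed here to return the mean and standard deviation of a Gaussian over the next pose.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_pose_distribution(prev_pose):
    # Hypothetical stand-in for a learned model: returns the mean and
    # std of a Gaussian over the next pose, given the previous pose.
    mean = 0.9 * prev_pose            # smooth continuation of motion
    std = np.full_like(prev_pose, 0.05)
    return mean, std

def sample_gesture(initial_pose, n_frames):
    # Autoregressive sampling: each frame is drawn from the predicted
    # distribution, so re-running yields a different sequence each time.
    poses = [np.asarray(initial_pose, dtype=float)]
    for _ in range(n_frames - 1):
        mean, std = predict_pose_distribution(poses[-1])
        poses.append(rng.normal(mean, std))
    return np.stack(poses)

# Two samples from the same start pose give two distinct gestures.
seq_a = sample_gesture(np.zeros(10), 30)
seq_b = sample_gesture(np.zeros(10), 30)
```

Because each frame is sampled rather than taken as the distribution's mode, repeated runs produce a variety of plausible motion sequences from the same speech input.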
“…Previous works have sought to address the problem of creating distinct styles by modelling and generating gestures for specific speakers [NKAS08, GBK*19, YCL*20, ALNM20] and by modifying gesture motion through general statistics such as hand height and velocity [AHKB20, YPJ*21]. These approaches lack flexibility because they are limited by the content of the training data.…”
Section: Introduction
confidence: 99%
“…deictic gestures for a lecturer in front of a display) from gesture style differences between speakers [ALNM20]. Yoon et al [YPJ*21] recently proposed an innovative approach to this challenge: an authoring toolkit that balances gesture quality and authoring effort. The toolkit combines automatic gesture generation using a GAN‐based generative model [YCL*20] and manual controls.…”
Section: Key Challenges Of Gesture Generation
confidence: 99%
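The manual controls mentioned above (and the speed, spatial extent, and handedness parameters named in the earlier citation) could be realized as post-hoc modifiers over a generated pose sequence. The sketch below is an illustrative assumption, not SGToolkit's actual implementation: it assumes a frames × joints array with the left-arm joints in the first half of the columns.

```python
import numpy as np

def apply_controls(poses, speed=1.0, extent=1.0, handedness=0.0):
    # poses: (n_frames, n_joints) array of joint values.
    # speed > 1 plays faster (fewer output frames); extent scales motion
    # amplitude around the mean pose; handedness in [-1, 1] damps the
    # de-emphasized arm (+1 = right-handed, -1 = left-handed).
    poses = np.asarray(poses, dtype=float)
    mean_pose = poses.mean(axis=0, keepdims=True)

    # Spatial extent: scale each frame's deviation from the mean pose.
    dev = extent * (poses - mean_pose)

    # Handedness: assume first half of joints = left arm, rest = right.
    half = poses.shape[1] // 2
    dev[:, :half] *= 1.0 - max(handedness, 0.0)
    dev[:, half:] *= 1.0 - max(-handedness, 0.0)
    out = mean_pose + dev

    # Speed: time-warp by linearly resampling to fewer/more frames.
    n_out = max(2, int(round(out.shape[0] / speed)))
    t_in = np.linspace(0.0, 1.0, out.shape[0])
    t_out = np.linspace(0.0, 1.0, n_out)
    return np.stack(
        [np.interp(t_out, t_in, out[:, j]) for j in range(out.shape[1])],
        axis=1,
    )

poses = np.random.default_rng(1).normal(size=(20, 8))
styled = apply_controls(poses, speed=2.0, extent=0.5, handedness=1.0)
```

In SGToolkit itself these controls feed into the neural generation model rather than post-processing its output, which is part of what distinguishes it from purely statistical motion modification.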
“…In addition to the objective evaluation, we conducted a subjective evaluation in which human participants rated the gesture motion videos of a virtual character. We followed the evaluation scheme introduced in the GENEA Challenge 2020 [18] and also used in related studies [33], [12]. The evaluation scheme consists of two studies that measure the human-likeness of the generated motion and the appropriateness of the motion to the input speech.…”
Section: Subjective Evaluations
confidence: 99%