2020
DOI: 10.1002/cav.1944
Automatic text‐to‐gesture rule generation for embodied conversational agents

Abstract: Interactions with embodied conversational agents can be enhanced using human-like co-speech gestures. Traditionally, rule-based co-speech gesture mapping has been used for this purpose. However, creating this mapping is laborious and often requires human experts. Moreover, human-created mappings tend to be limited and are therefore prone to generating repeated gestures. In this article, we present an approach to automate the generation of rule-based co-speech gesture mapping from publicly available large vid…
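To make concrete what "rule-based co-speech gesture mapping" means, the sketch below is a minimal, hypothetical Python illustration in which trigger words in an utterance are mapped to named gesture clips. The rule table, gesture names, and the GestureCue structure are illustrative assumptions, not the paper's actual mapping; per the abstract, the paper's contribution is to build such a rule table automatically from large video datasets rather than authoring it by hand.

    # Minimal, hypothetical sketch of a rule-based text-to-gesture mapping.
    # The rule table, gesture names, and matching strategy are illustrative
    # assumptions, not the mapping described in the paper.
    from dataclasses import dataclass

    @dataclass
    class GestureCue:
        word_index: int   # position of the trigger word in the utterance
        gesture: str      # name of the gesture clip to play

    # Hypothetical rule table: trigger word -> gesture clip name.
    RULES = {
        "you": "point_forward",
        "big": "wide_arms",
        "small": "pinch_fingers",
        "hello": "wave_right_hand",
    }

    def map_text_to_gestures(utterance: str) -> list[GestureCue]:
        """Scan an utterance and emit a gesture cue for each matched rule."""
        cues = []
        for i, token in enumerate(utterance.lower().split()):
            word = token.strip(".,!?")
            if word in RULES:
                cues.append(GestureCue(word_index=i, gesture=RULES[word]))
        return cues

    if __name__ == "__main__":
        # e.g. [GestureCue(word_index=0, gesture='wave_right_hand'), ...]
        print(map_text_to_gestures("Hello! You caught a big fish."))

A hand-authored table like RULES is exactly the bottleneck the abstract describes: it is small, so the same few gestures recur, which is what mining the rules from video data is meant to address.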

Cited by 15 publications (16 citation statements)
References 24 publications
“…We demonstrate realistic interactions between a virtual human and real animal dolls using the silhouette mesh. Our virtual human is equipped with voice chat [3] and gesture animation [47] to provide interactive scenarios (see Figure 7). While playing with the dolls, the virtual human will point at or approach a doll to draw the user's attention, and occasionally touch the doll in a plausible way to make the experience more appealing.…”
Section: Case Study: Users Play With Animal Dolls in Mobile AR Application
confidence: 99%
“…For example, SimSensei Kiosk [4] demonstrated a VA's listening gestures, such as nodding and gazing, during clinical interviews to build rapport with the client. Ali et al. [5] focused on improving the naturalness of conversational interactions with hand gestures (which unconsciously express engagement while speaking) and on synchronizing them with speech. Another study explored the impact of a VA's body size on user experience in augmented reality (AR) [15], [16].…”
Section: Virtual Agents With Engaging Behavior
confidence: 99%
“…Engaging behavior (E) aims to make the user aware of the VA's active engagement in the group discussion. Our study employed four non-verbal behaviors that have also been commonly used in prior studies [4], [5], [7], namely mutual gaze, directed posture, listening gestures, and speaking gestures (also listed in Table 1). In the group discussion, the VA has two roles: to listen to the other participants (as a listener) and to speak to them (as a speaker).…”
Section: Engaging and Non-Engaging Behavior (E/NE)
confidence: 99%