International Conference on Multimodal Interaction 2023
DOI: 10.1145/3610661.3616547

Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent

Alice Delbosc,
Magalie Ochs,
Nicolas Sabouret
et al.

Abstract: This paper introduces a new model to generate rhythmically relevant non-verbal facial behaviors for virtual agents while they speak. The model demonstrates perceived performance comparable to behaviors directly extracted from the data and replayed on a virtual agent, in terms of synchronization with speech and believability. Interestingly, we found that training the model with two different sets of data, instead of one, did not necessarily improve its performance. The expressiveness of the people in the datase…

Cited by 3 publications (1 citation statement)
References 73 publications (88 reference statements)
“…It relies on the frequency components of the movement's speed profile (i.e., changes of speed over time), represented using the Fourier magnitude spectrum. Closely related to this motion invariant, but sharing the data‐driven comparison, let us also mention the study of Delbosc et al. [DOS*23], which aimed at generating synchronized and believable facial non‐verbal animations for conversational VH. The authors proposed to evaluate their resulting animation against ground‐truth data, both using a distance metric based on DTW and by comparing jerk as a good indicator of motion naturalness.…”
Section: Evaluation Methods (mentioning)
confidence: 99%
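The evaluation strategy this citation statement describes, measuring a DTW distance between generated and ground-truth trajectories and using jerk as a proxy for motion naturalness, can be sketched in a few lines. The sketch below is illustrative only and is not the authors' implementation; the function names, the (T, D) array shapes, and the 25 fps sampling rate are assumptions introduced here.

```python
# Illustrative sketch (not the paper's code): compare a generated
# facial-motion trajectory to ground truth with a DTW distance, and
# compute mean jerk as a smoothness/naturalness indicator.
import numpy as np

def mean_jerk(positions: np.ndarray, dt: float) -> float:
    """Mean magnitude of jerk (third derivative of position).

    positions: array of shape (T, D) sampled at interval dt (seconds).
    Lower jerk is commonly read as smoother, more natural motion.
    """
    jerk = np.diff(positions, n=3, axis=0) / dt**3
    return float(np.mean(np.linalg.norm(jerk, axis=1)))

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two multivariate sequences (rows are frames)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Hypothetical usage: `generated` and `ground_truth` stand in for
# (T, D) facial action-unit or landmark trajectories at 25 fps.
rng = np.random.default_rng(0)
generated = rng.standard_normal((100, 6)).cumsum(axis=0)
ground_truth = rng.standard_normal((120, 6)).cumsum(axis=0)
print("DTW distance:", dtw_distance(generated, ground_truth))
print("Mean jerk (generated):", mean_jerk(generated, dt=1 / 25))
```

DTW is used here because generated and ground-truth sequences need not be time-aligned or of equal length; jerk complements it by scoring smoothness independently of any reference.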