1996
DOI: 10.1207/s15516709cog2001_1
Generating Facial Expressions for Speech

Abstract: This paper reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures as to produce convincing animation. Towards this end we have produced a high-level programming language for 3D animation of facial expressions. We have been concerned primari…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1
1
1

Citation Types

0
91
0

Year Published

1998
1998
2019
2019

Publication Types

Select...
5
2
2

Relationship

1
8

Authors

Journals

citations
Cited by 180 publications
(91 citation statements)
references
References 30 publications
0
91
0
Order By: Relevance
“…Classic work on virtual humans in the computer graphics community focuses on perception and action in 3D worlds (Badler, Phillips, & Webber, 1993; Thalmann, 1993), but largely ignores dialogue and emotions. Several systems have carefully modeled the interplay between speech and nonverbal behavior in face-to-face dialogue (Cassell, Bickmore, Campbell, Vilhjálmsson, & Yan, 2000; Pelachaud, Badler, & Steedman, 1996), but these virtual humans do not include emotions and cannot participate in physical tasks in 3D worlds. Some work has begun to explore the integration of conversational capabilities with emotions (Lester, Towns, Callaway, Voerman, & FitzGerald, 2000; Marsella, Johnson, & LaBore, 2000; Poggi & Pelachaud, 2000), but still does not address physical tasks in 3D worlds.…”
Section: An Integration Challenge (mentioning)
confidence: 99%
“…In text-driven facial animation, the process generally involves determining a mapping from text (orthographic or phonetic) onto visemes by means of vector quantization [10], [11] or a rule-based system [12], [13]. Facial animation driven by speech can be approached in a similar fashion by deriving the phoneme sequence directly from the speech signal, as is done in speech recognition [14]–[16].…”
Section: Related Work (mentioning)
confidence: 99%
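The rule-based route described in this snippet amounts to a many-to-one lookup from phonemes onto visible mouth shapes (visemes). The following is a minimal sketch of that idea only; the phoneme symbols, viseme classes, and the names `PHONEME_TO_VISEME` and `phonemes_to_visemes` are illustrative assumptions, not the inventory or API of the cited systems.

```python
# Minimal sketch of a rule-based phoneme-to-viseme mapping.
# Many phonemes share one visible mouth shape (viseme), so the
# mapping is many-to-one. All symbols here are placeholders.
PHONEME_TO_VISEME = {
    "p": "bilabial",    "b": "bilabial",    "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "iy": "spread",     "eh": "mid_open",
    "aa": "open",       "uw": "rounded",    "ow": "rounded",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence onto visemes, collapsing repeats so
    adjacent identical mouth shapes become one animation target."""
    visemes = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "neutral")  # fallback shape
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

# e.g. the phonemes of "beam":
print(phonemes_to_visemes(["b", "iy", "m"]))
# -> ['bilabial', 'spread', 'bilabial']
```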
“…Each is represented by two parameters: its time of occurrence and its type. Our algorithm [53] embodies rules to automatically generate facial expressions, following the principle of synchrony. The program scans the input utterances and computes the different facial expressions corresponding to these functional groups.…”
Section: Facial Expression Generation (mentioning)
confidence: 99%
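The two parameters named here (time of occurrence and type) map naturally onto a small event record, with a rule table driving the scan over the utterance. A minimal sketch under those assumptions follows; the functional groups, rule table, and expression types are invented placeholders, not the rule set of the cited algorithm.

```python
from dataclasses import dataclass

@dataclass
class FacialExpression:
    """As the snippet describes, each expression carries just two
    parameters: when it occurs and what type it is."""
    time: float  # time of occurrence, in seconds
    type: str    # e.g. "eyebrow_raise" (placeholder labels)

# Hypothetical rules mapping functional groups in the utterance to
# expression types; the actual rule set in the paper is far richer.
RULES = {
    "pitch_accent": "eyebrow_raise",
    "pause":        "blink",
    "emphasis":     "head_nod",
}

def generate_expressions(utterance):
    """Scan an annotated utterance (a list of (time, functional_group)
    pairs) and emit expressions synchronized with the speech events."""
    expressions = []
    for time, group in utterance:
        if group in RULES:
            # Synchrony principle: the expression is anchored to the
            # time of the triggering speech event.
            expressions.append(FacialExpression(time, RULES[group]))
    return expressions

annotated = [(0.20, "pitch_accent"), (0.55, "emphasis"), (0.90, "pause")]
for e in generate_expressions(annotated):
    print(e)
```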
“…The program scans the input utterances and computes the different facial expressions corresponding to these functional groups. The computation of the lip shape is done in three passes and incorporates coarticulation effects [53]. Phonemes are characterized by their degree of deformability.…”
Section: Facial Expression Generation (mentioning)
confidence: 99%
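One way to read this snippet is as forward, backward, and smoothing passes over per-phoneme lip targets, with each phoneme's deformability weighting how strongly its neighbors pull on it. The sketch below is only that reading; the deformability values, target shapes, and blending rule are illustrative assumptions, not the model from [53].

```python
# Illustrative three-pass lip-shape computation with a simple
# deformability-weighted coarticulation rule. All values are
# invented stand-ins; shapes are a lip-rounding value in [0, 1].
DEFORMABILITY = {"p": 0.1, "b": 0.1, "uw": 0.2, "aa": 0.5, "t": 0.9, "s": 0.8}
TARGET_SHAPE  = {"p": 0.0, "b": 0.0, "uw": 0.9, "aa": 0.6, "t": 0.4, "s": 0.3}

def lip_shapes(phonemes):
    shapes = [TARGET_SHAPE[p] for p in phonemes]

    # Pass 1 (forward): a highly deformable phoneme is pulled toward
    # the shape of its predecessor (carryover coarticulation).
    for i in range(1, len(phonemes)):
        d = DEFORMABILITY[phonemes[i]]
        shapes[i] = (1 - d) * shapes[i] + d * shapes[i - 1]

    # Pass 2 (backward): anticipatory coarticulation from the
    # following phoneme.
    for i in range(len(phonemes) - 2, -1, -1):
        d = DEFORMABILITY[phonemes[i]]
        shapes[i] = (1 - d) * shapes[i] + d * shapes[i + 1]

    # Pass 3: smooth the trajectory so consecutive targets do not jump.
    return [shapes[0]] + [
        0.5 * (shapes[i - 1] + shapes[i]) for i in range(1, len(shapes))
    ]

# The rounding of "uw" spreads onto the highly deformable "s":
print(lip_shapes(["s", "uw", "p"]))
```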