Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology
DOI: 10.1145/1450579.1450621
Feature points based facial animation retargeting

Abstract: We present a method for transferring facial animation in real time. The source animation may be an existing 3D animation, or 2D data provided by a video tracker or a motion capture system. Based on two sets of feature points manually selected on the source and target faces (the only manual work required), an RBF network is trained and provides a geometric transformation between the two faces. (Figure 1: the bottom row shows the virtual face animated by retargeting expressions from the source face, top row.)
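The feature-point-driven RBF mapping described in the abstract can be sketched as below. This is a minimal illustration, not the authors' implementation: the Gaussian kernel, the `eps` width, and the function names are assumptions.

```python
import numpy as np

def train_rbf(src_pts, tgt_pts, eps=1.0):
    """Fit RBF weights mapping source feature points to target feature points.

    src_pts, tgt_pts: (n, 3) arrays of manually selected feature points.
    A Gaussian basis is assumed; the paper's exact kernel may differ.
    """
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)            # (n, n) kernel matrix
    weights = np.linalg.solve(phi, tgt_pts)  # (n, 3) interpolation weights
    return weights

def apply_rbf(weights, src_pts, query_pts, eps=1.0):
    """Map arbitrary source-face vertices into the target face's space."""
    d = np.linalg.norm(query_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)
    return phi @ weights
```

Since this is exact interpolation, evaluating the network at the source feature points reproduces the target feature points; other source vertices are deformed smoothly in between.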

Cited by 29 publications (21 citation statements). References 15 publications.
“…Motion that occurs in 3D face models is scaled according to the relative scale between the source and the target [4], [6]. Marker points on the source face define the source space, while feature points on the target face define the target space [7], [8].…”
Section: Art Direction
confidence: 99%
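The relative-scale adjustment this excerpt describes can be illustrated as follows; using the feature-point bounding-box diagonal as the scale measure is an assumption (inter-ocular distance or any other consistent measure would work the same way).

```python
import numpy as np

def scale_motion(displacements, src_pts, tgt_pts):
    """Scale source-face motion vectors by the target/source size ratio.

    displacements: (m, 3) per-frame motion vectors on the source face.
    Scale is the feature-point bounding-box diagonal (an assumption).
    """
    src_size = np.linalg.norm(src_pts.max(axis=0) - src_pts.min(axis=0))
    tgt_size = np.linalg.norm(tgt_pts.max(axis=0) - tgt_pts.min(axis=0))
    return displacements * (tgt_size / src_size)
```

For a target face twice the size of the source, every motion vector is simply doubled.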
“…Besides, we interpolated new landmarks to avoid irregular deformation. In existing work on point-driven facial animation, an RBF network [24], [25] is trained to map the source points to the template mesh so as to maintain the topology of the template. However, in our case it is beneficial to keep the structure of the target face; hence, we perform only a rigid alignment instead.…”
Section: A Point-Driven Template Deformation
confidence: 99%
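The rigid alignment that this citing work substitutes for the RBF mapping is commonly computed with the Kabsch algorithm; the sketch below shows that standard least-squares formulation, and whether the cited work uses exactly this variant is an assumption.

```python
import numpy as np

def rigid_align(src_pts, tgt_pts):
    """Kabsch algorithm: best rigid (rotation + translation) fit of src onto tgt.

    src_pts, tgt_pts: (n, 3) corresponding point sets, n >= 3, non-collinear.
    Returns (R, t) such that src_pts @ R.T + t approximates tgt_pts.
    """
    src_mean = src_pts.mean(axis=0)
    tgt_mean = tgt_pts.mean(axis=0)
    src_c = src_pts - src_mean
    tgt_c = tgt_pts - tgt_mean
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(u @ vt))      # guard against reflections
    R = ((u * np.array([1.0, 1.0, d])) @ vt).T  # proper rotation
    t = tgt_mean - R @ src_mean
    return R, t
```

Unlike the RBF mapping, this transform has only six degrees of freedom, so it preserves the target face's own geometry exactly, which is the point the citing authors make.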
“…Our approach towards bringing together large multimodal datasets and short-term user interaction is to use machine learning. There is already a very large body of literature on applying machine-learning techniques in the fields of gesture or voice recognition [9], gesture or voice synthesis [10], gesture or voice conversion [11], and what is called implicit mapping, i.e. the use of statistical layers as a way of connecting inputs to outputs in interactive systems [12].…”
Section: Performative Control and Machine Learning
confidence: 99%