2012
DOI: 10.1145/2159516.2159519

Spacetime expression cloning for blendshapes

Abstract: The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable b…
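The truncated passage above states the core principle: source and target movement should match, where movement is the time derivative of position, and the match is enforced over the whole sequence. The sketch below is only one reading of that principle, not the paper's actual formulation: it assumes the unknowns are per-frame target blendshape weights, that the source movement has already been mapped onto the target's vertex layout, and that a first-frame pin plus a small regularization term stand in for whatever boundary conditions the paper actually uses. The function name and input layout are illustrative assumptions.

```python
# Minimal sketch (not the paper's solver): spacetime retargeting of blendshape
# weights by matching frame-to-frame movement (finite differences) between
# source and target, solved as one sparse least-squares problem.
# Assumed inputs:
#   B         : (3V, S) target blendshape basis (per-vertex displacements per shape)
#   src_delta : (F-1, 3V) source movement, i.e. position differences between
#               consecutive frames, assumed already mapped to the target layout
#   w0        : (S,) target weights for the first frame (boundary condition)
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def retarget_spacetime(B, src_delta, w0, reg=1e-3):
    F = src_delta.shape[0] + 1           # number of frames
    S = B.shape[1]                       # number of blendshapes
    rows, rhs = [], []
    # Movement-matching terms: B (w_{t+1} - w_t) ~= src_delta[t] for every t
    for t in range(F - 1):
        block = sp.lil_matrix((B.shape[0], F * S))
        block[:, (t + 1) * S:(t + 2) * S] = B
        block[:, t * S:(t + 1) * S] = -B
        rows.append(block.tocsr())
        rhs.append(src_delta[t])
    # Boundary condition: pin the first frame's weights to w0
    pin = sp.lil_matrix((S, F * S))
    pin[:, :S] = np.eye(S)
    rows.append(pin.tocsr())
    rhs.append(w0)
    # Small Tikhonov term so the remaining weights stay bounded
    rows.append(reg * sp.eye(F * S, format="csr"))
    rhs.append(np.zeros(F * S))
    A = sp.vstack(rows).tocsr()
    b = np.concatenate(rhs)
    w = spla.lsqr(A, b)[0]
    return w.reshape(F, S)               # per-frame target blendshape weights
```

Stacking all frames into a single system is what makes this a spacetime formulation: the solve trades off movement matching across the entire sequence instead of fitting each frame in isolation.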

Cited by 72 publications (49 citation statements) · References 52 publications
“…Moreover, we extend our approach by combining it with an expression cloning technique, overcoming the limitation in [17] that only face models for which a prior is available can be edited. Recently, spacetime editing methods [15, 18-21] that can propagate an edit made on a single face frame across the whole sequence have been explored. For example, [19] built a Poisson equation to propagate a user's modifications at any frame to the entire sequence.…”
Section: Related Work (mentioning, confidence: 99%)
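The Poisson-based propagation attributed to [19] above can be illustrated, in a deliberately simplified one-dimensional form, as a temporal Laplace interpolation: the per-frame edit offset is fixed at the edited frame, clamped to zero at the sequence boundaries, and required to have zero second temporal difference everywhere else. The function below and its boundary handling are assumptions made for the illustration, not the formulation in [19].

```python
# Illustrative sketch only: propagate a single-frame edit over a sequence by
# solving a 1-D Poisson/Laplace system in time. The offset equals the edit at
# the edited frame, is zero at the sequence endpoints, and has zero second
# temporal difference at every other frame.
import numpy as np

def propagate_edit(num_frames, edit_frame, edit_offset):
    """Return per-frame offsets of shape (num_frames, D) spreading the edit."""
    D = edit_offset.shape[0]
    A = np.zeros((num_frames, num_frames))
    b = np.zeros((num_frames, D))
    for t in range(num_frames):
        if t in (0, num_frames - 1):      # clamp endpoints to zero offset
            A[t, t] = 1.0
        elif t == edit_frame:             # hard constraint at the edited frame
            A[t, t] = 1.0
            b[t] = edit_offset
        else:                             # discrete temporal Laplacian = 0
            A[t, t - 1], A[t, t], A[t, t + 1] = 1.0, -2.0, 1.0
    return np.linalg.solve(A, b)

# Example: a 5-unit edit to one coordinate at frame 10 of a 30-frame clip
offsets = propagate_edit(30, 10, np.array([5.0]))
```

With these constraints the offset decays linearly to zero on either side of the edited frame, which is the simplest behaviour such a temporal Poisson solve can produce.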
“…None of them maps directly to a control rig, ignoring the need for a post-production editing phase. Seol et al. [24] specifically address the problem of providing room for additional editing by an animator. But again, the target of their retargeting method is a set of blendshapes, which, for animators, is harder to use than a control rig.…”
Section: Online Puppetry and Performance Capture (mentioning, confidence: 99%)
“…Our idea of combining facial retargeting with editing is inspired by recent success in facial expression editing, including both spatial facial editing [Joshi et al. 2003; Zhang et al. 2004; Lau et al. 2007; Meyer and Anderson 2007; Bickel et al. 2008; Lau et al. 2009; Lewis and Anjyo 2010; Tena et al. 2011] and spatial-temporal expression editing [Li and Deng 2008; Seol et al. 2012; Akhter et al. 2012]. Recent research on spatial facial editing has focused on using various forms of data-driven models to constrain the solution space of facial editing.…”
Section: Facial Animation Editing (mentioning, confidence: 99%)
“…Recent research on spatial facial editing has focused on using various forms of data-driven models to constrain the solution space of facial editing. For example, Zhang and colleagues … Our work is most closely related to the work of [Seol et al. 2012] because both allow for correction and adjustment of the retargeted motion in the spatial-temporal domain. Our work, however, differs from theirs in several ways.…”
Section: Facial Animation Editing (mentioning, confidence: 99%)