2014
DOI: 10.1145/2638549

Driving High-Resolution Facial Scans with Video Performance Capture

Abstract: We present a process for rendering a realistic facial performance with control of viewpoint and illumination. The performance is based on one or more high-quality geometry and reflectance scans of an actor in static poses, driven by one or more video streams of a performance. We compute optical flow correspondences between neighboring video frames, and a sparse set of correspondences between static scans and video frames. The latter are made possible by leveraging the relightability of the static 3D scans to m…
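
A minimal sketch of the first step named in the abstract, dense optical flow between neighboring video frames, is shown below. The paper's actual flow algorithm is not identified in this excerpt; the OpenCV Farneback call, the placeholder file name, and all parameter values are assumptions for illustration only.

```python
# Sketch only: OpenCV's Farneback flow stands in for whichever dense optical
# flow method the paper uses; "performance.mp4" is a placeholder input.
import cv2

cap = cv2.VideoCapture("performance.mp4")
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

flows = []  # flows[i][y, x] = (dx, dy) mapping pixels of frame i to frame i+1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel displacement between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flows.append(flow)
    prev_gray = gray
cap.release()
```

These frame-to-frame correspondences are the dense temporal tracking that the abstract pairs with the sparse scan-to-video correspondences; how the two are combined is not detailed in the excerpt shown here.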


Cited by 73 publications (53 citation statements, published 2015–2022). References 25 publications.
“…The advantage of hand animation is that the artist can precisely style and time the animation, but it is extremely costly and time consuming to produce. The main alternative to hand animation is performance-driven animation using facial motion capture of an actor's face [Beeler et al. 2011; Cao et al. 2013, 2015; Fyffe et al. 2014; Huang et al. 2011; Li et al. 2013; Weise et al. 2011; Weng et al. 2014; Zhang et al. 2004]. Performance-driven animation requires an actor to perform all shots, and may generate animation parameters that are complex and time consuming for an animator to edit (e.g.…”
Section: Related Work (mentioning, confidence: 99%)
“…None of these approaches learns a generative wrinkle formation model from video. Generative models of face wrinkle formation were learned from high-quality expressions (out of a vast set of examples) captured with a dense sensor array [Bermano et al. 2014; Cao et al. 2015] or with depth cameras, or also by interpolating dense high-quality scans in a video-driven way [Fyffe et al. 2014]. In contrast, our approach learns such a model from monocular RGB video alone.…”
Section: Introduction (mentioning, confidence: 99%)
“…Note also that our approach is fully automatic and requires no manual intervention during model creation or tracking, as required in and Bouaziz et al. [2013]. Our method needs no additional input other than a face video, meaning no specific sequence of face expressions [Ichim et al. 2015; Weise et al. 2011], no densely captured static face geometry [Fyffe et al. 2014; Valgaerts et al. 2012; Ichim et al. 2015], and no face detail regression model learned off-line [Cao et al. 2015].…”
Section: Introduction (mentioning, confidence: 99%)
“…In early versions of the Light Stage, they focused on static faces. The latest version of the Light Stage captures emotion and expression variations by employing high-speed cameras and lighting controllers (39), (9). Their system has been utilized in various movie special effects, such as Spider-Man 2/3, King Kong, Superman Returns, Hancock, and Avatar.…”
Section: Measurement-Based Face Modeling (mentioning, confidence: 99%)