2014
DOI: 10.1145/2661229.2661290

Automatic acquisition of high-fidelity facial performances using monocular videos

Figure 1: Our system automatically captures high-fidelity facial performances using Internet videos: (left) input video data; (middle) the captured facial performances; (right) facial editing results: wrinkle removal and facial geometry editing.

Abstract: This paper presents a facial performance capture system that automatically captures high-fidelity facial performances using uncontrolled monocular videos (e.g., Internet videos). We start the process by detecting and tracking important facial features such as t…

Cited by 126 publications (106 citation statements). References 44 publications.
“…Recently, 3D face priors, dense flow, and shape-from-shading methods have been introduced to achieve higher-fidelity 3D tracking from unconstrained monocular videos [Garrido et al. 2013; Shi et al. 2014]. Using dimension-reduced linear models and sufficient prior facial data, a single camera can be used to generate compelling facial animation in real time without any calibration [Cao et al. 2014].…”
Section: Previous Work
confidence: 99%
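The "dimension-reduced linear model" mentioned above can be illustrated with a minimal blendshape sketch: a face is the neutral mesh plus a weighted sum of expression offsets, and fitting reduces to linear least squares. All sizes, data, and names below are hypothetical stand-ins, not the actual model or solver from [Cao et al. 2014].

```python
import numpy as np

# Hypothetical model sizes: V vertices, K blendshapes.
V, K = 50, 8
rng = np.random.default_rng(1)
neutral = rng.standard_normal((V, 3))     # neutral face mesh
deltas = rng.standard_normal((K, V, 3))   # per-blendshape vertex offsets

def blendshape_face(weights):
    """Linear face model: face = neutral + weighted sum of offsets."""
    return neutral + np.tensordot(weights, deltas, axes=([0], [0]))

# Given an observed mesh, expression weights are recovered by linear
# least squares -- the core operation behind real-time model fitting.
true_w = rng.uniform(0, 1, K)
observed = blendshape_face(true_w)
A = deltas.reshape(K, -1).T               # (3V, K) design matrix
w_fit, *_ = np.linalg.lstsq(A, (observed - neutral).ravel(), rcond=None)
print(np.allclose(w_fit, true_w))         # True
```

In a real tracker the residual is measured against sparse 2D landmarks through a camera projection rather than a full 3D mesh, but the low-dimensional linear structure is what makes per-frame solves fast enough for real time.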
“…In terms of 3D facial geometry reconstruction for the refinement of landmarks, there has recently been an increasing amount of research based on 2D images and videos [19, 35–41]. To track facial landmarks accurately, it is important to first reconstruct the face geometry.…”
Section: Literature Review
confidence: 99%
“…For example, methods such as those in Refs. [19, 37, 40] can reconstruct details such as wrinkles and track subtle facial movements, but are affected by shadows and occlusions. Robust methods such as Refs.…”
Section: Literature Review
confidence: 99%
“…While the methods above are able to infer detailed geometry, we aim to create an avatar of the recorded user that can be animated programmatically or driven by other sources of tracking parameters. The systems of [Garrido et al. 2013] and [Shi et al. 2014] essentially recover detailed facial geometry by providing one mesh per frame, deformed to match the input data. The former uses a pre-built user-specific blendshape model for face alignment, employing automatically corrected feature points [Saragih et al. 2011].…”
Section: Dynamic Modeling
confidence: 99%
“…Although our tracking approach and detail enhancement are based on similar principles, the aim of our approach is to integrate all of these shape corrections directly into our proposed two-scale representation of dynamic 3D faces. Shi et al. [2014] use their own feature detector along with a non-rigid structure-from-motion algorithm to track and model the identity and per-frame expressions of the face using a bilinear face model. Additionally, a keyframe-based iterative shape-from-shading approach further refines the bilinear model parameters, as well as the albedo texture of the face and per-frame normal maps exhibiting high-frequency details such as wrinkles. Neither method aims at creating an animation-ready avatar that incorporates all of the extracted details.…”
Section: Dynamic Modeling
confidence: 99%
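The bilinear face model referred to above separates identity (fixed per subject) from expression (varying per frame) via a core tensor contracted with two weight vectors. The sketch below assumes hypothetical dimensions and random data; it is not the actual tensor or parameterization used by Shi et al. [2014].

```python
import numpy as np

# Hypothetical sizes: V vertices (3V flattened coords), identity and
# expression mode dimensions of the core tensor.
n_verts, n_id, n_exp = 100, 10, 5
rng = np.random.default_rng(0)
core = rng.standard_normal((3 * n_verts, n_id, n_exp))

def bilinear_face(w_id, w_exp):
    """Contract the core along the identity axis, then the expression
    axis, yielding one flattened 3D mesh (x, y, z per vertex)."""
    per_subject = np.tensordot(core, w_id, axes=([1], [0]))  # (3V, n_exp)
    return per_subject @ w_exp                               # (3V,)

w_id = rng.standard_normal(n_id)    # solved once per video (identity)
w_exp = rng.standard_normal(n_exp)  # solved per frame (expression)
mesh = bilinear_face(w_id, w_exp)
print(mesh.shape)  # (300,)
```

The design choice this captures: because identity weights are shared across all frames of a video, every tracked frame constrains the same small identity vector, while only the low-dimensional expression vector is re-solved per frame.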