2009 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2009.5206868

Dense 3D motion capture for human faces


Cited by 38 publications (8 citation statements, published 2011–2023); references 24 publications.
“…Explicit face markers significantly simplify tracking, but they also limit the amount of spatial detail that can be captured. Performance capture based on dense 3D acquisition, such as structured light scanners [Zhang et al. 2004; Weise et al. 2009] or multi-view camera systems [Furukawa and Ponce 2009; Bradley et al. 2010; Beeler et al. 2011; Valgaerts et al. 2012], has been developed more recently and has proven effective at capturing fine-scale dynamics. Processing times can be significant, however, often impeding interactive frame rates.…”
Section: Related Work
confidence: 99%
“…Modeling photorealistic human faces is a long-standing problem in computer graphics and vision. Early works leverage multi-view capture systems to obtain high-fidelity human faces [2, 4, 5, 13, 14, 24, 52, 79]. While these approaches provide accurate facial reflectance and geometry, photorealistic rendering requires significant manual effort [54] and is typically not real-time with physics-based rendering.…”
Section: Related Work
confidence: 99%
“…All these methods involve highly specialized sensors and/or controlled studio environments [Zhang and Huang 2004; Borshukov et al. 2005; Ma et al. 2007; Beeler et al. 2010; Bradley et al. 2010]. High-resolution facial motion is generally recovered through variants of non-rigid registration and tracking algorithms applied across sequences of input geometry, texture, or both [Furukawa and Ponce 2009; Alexander et al. 2009; Li et al. 2009; Weise et al. 2009; Bradley et al. 2010; Wilson et al. 2010]. With a focus on precision, these systems are not designed to achieve interactive performance in general environments, a crucial requirement for the type of consumer-level applications targeted by our work.…”
Section: Related Work
confidence: 99%
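
The quoted passages above repeatedly point to non-rigid registration over sequences of scanned geometry as the common machinery behind dense facial performance capture. As an illustration only, the sketch below shows the simplest member of that family: Laplacian-regularized displacement fitting, where per-vertex offsets are pulled toward known correspondences while a mesh-graph smoothness term keeps the deformation coherent. The function name, the closed-form least-squares formulation, and the toy data are assumptions made for illustration; none of the cited systems is reduced to this.

```python
# Minimal illustrative sketch (not the method of any cited paper):
# Laplacian-regularized non-rigid registration. Solve for per-vertex
# displacements d that move a source mesh toward target correspondences
# while a graph-Laplacian prior keeps the deformation smooth.
import numpy as np

def nonrigid_register(src, tgt, edges, smoothness=1.0):
    """src, tgt: (N, 3) corresponding points; edges: list of (i, j) vertex pairs.
    Assumes correspondences are already known. Returns the deformed source."""
    n = src.shape[0]
    # Graph Laplacian built from the mesh edges (smoothness prior).
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    # Normal equations for: min_d ||d - (tgt - src)||^2 + smoothness * ||L d||^2
    A = np.eye(n) + smoothness * (L.T @ L)
    b = tgt - src
    d = np.linalg.solve(A, b)  # solves all three coordinates jointly (b is N x 3)
    return src + d

if __name__ == "__main__":
    # Toy example: a 4-vertex chain pulled toward slightly displaced targets.
    src = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
    tgt = src + np.array([[0.0, 0.1, 0], [0, 0.2, 0], [0, 0.2, 0], [0, 0.1, 0]])
    edges = [(0, 1), (1, 2), (2, 3)]
    print(nonrigid_register(src, tgt, edges, smoothness=0.5))
```

Raising the smoothness weight trades correspondence accuracy for a stiffer, more regular deformation; the dense-capture systems cited above add richer data terms (texture, multi-view photoconsistency) and temporal tracking on top of this basic idea.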