SIGGRAPH Asia 2020 Posters
DOI: 10.1145/3415264.3425453
Free-Viewpoint Facial Re-Enactment from a Casual Capture

Abstract: Figure 1: We capture a video around a target subject (the Egyptian bust) and re-enact the target's face from novel viewpoints. The re-enactment is driven by an expression sequence of a source subject, captured using a custom app running on an iPhone.

Cited by 5 publications (3 citation statements)
References 7 publications
“…Our approach allows true free-viewpoint navigation while reasonably preserving identity. Rao et al. [2020] share some similarities with our approach, since they also process facial animations in a canonical frame and lift the face to 3D. However, they use StyleGAN to synthesize only the mouth, and require dense capture.…”
Section: Face Models and Portrait Rendering
confidence: 99%
“…Since our camera manifold assumes the eyes to be at defined 3D positions p_l and p_r, we perform 3D alignment to the canonical configuration (Eq. 1) [Gao et al. 2020; Rao et al. 2020], making our method independent of the reconstruction algorithm.…”
Section: Alignment to Canonical Coordinates
confidence: 99%