2016 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2016.7532378

Real-time avatar animation with dynamic face texturing

Abstract: In this paper, we present a system to capture and animate a highly realistic avatar model of a user in real-time. The animated human model consists of a rigged 3D mesh and a texture map. The system is based on KinectV2 input which captures the skeleton of the current pose of the subject in order to animate the human shape model. An additional high-resolution RGB camera is used to capture the face for updating the texture map on each frame. With this combination of image based rendering with computer graphics w…
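The abstract outlines a per-frame pipeline: the KinectV2 skeleton drives the rigged body mesh while a separate high-resolution RGB camera refreshes the face region of the texture map on every frame. Below is a minimal sketch of that loop; all function and variable names are placeholders for the sensor, skinning, and rendering components (none of them come from the paper).

```python
import numpy as np

# --- Hypothetical stubs; the real system reads KinectV2 and an HD RGB camera. ---
def capture_skeleton():
    """Stand-in for the KinectV2 body tracker: one quaternion per joint."""
    return {"spine": np.array([1.0, 0.0, 0.0, 0.0]),
            "head":  np.array([1.0, 0.0, 0.0, 0.0])}

def capture_face_frame():
    """Stand-in for the high-resolution RGB camera (H x W x 3 image)."""
    return np.zeros((1080, 1920, 3), dtype=np.uint8)

def pose_mesh(rig, joint_rotations):
    """Apply the captured pose to the rigged mesh (placeholder for skinning)."""
    rig["pose"] = joint_rotations
    return rig

def update_face_texture(texture, face_frame, face_uv_region):
    """Overwrite the face region of the texture with the latest camera image."""
    (u0, v0), (u1, v1) = face_uv_region
    h, w = v1 - v0, u1 - u0
    # naive nearest-neighbour resize of the camera frame into the UV rectangle
    ys = np.linspace(0, face_frame.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, face_frame.shape[1] - 1, w).astype(int)
    texture[v0:v1, u0:u1] = face_frame[ys][:, xs]
    return texture

rig = {"pose": None}
texture = np.zeros((2048, 2048, 3), dtype=np.uint8)
face_uv_region = ((1024, 0), (1536, 512))    # assumed UV rectangle for the face

for _ in range(3):                            # stands in for the real-time loop
    rig = pose_mesh(rig, capture_skeleton())
    texture = update_face_texture(texture, capture_face_frame(), face_uv_region)
    # render(rig, texture)  # handed to the graphics pipeline each frame
```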

Cited by 11 publications (12 citation statements) | References 23 publications

“…Textured animations can be achieved in real‐time [FPHE16, FPE14]. The model is animated with Kinect using retargeting to map the skeleton parameters.…”
Section: Evaluation, Discussion and Applications
confidence: 99%
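The statement above notes that the model is animated with Kinect by retargeting the captured skeleton parameters onto the rigged mesh. A minimal sketch of such a retargeting step, assuming a simple joint-to-bone name map and per-bone rest-pose offset quaternions; the mapping, names, and offsets are illustrative, not taken from the cited work.

```python
import numpy as np

def quat_multiply(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Illustrative mapping from Kinect joint names to the rig's bone names.
JOINT_MAP = {"SpineMid": "spine_01", "ElbowLeft": "upperarm_l", "Head": "head"}

def retarget(kinect_rotations, rest_offsets):
    """Map each captured joint rotation onto the corresponding rig bone,
    composing it with a per-bone rest-pose offset (identity by default)."""
    bone_rotations = {}
    for joint, quat in kinect_rotations.items():
        bone = JOINT_MAP.get(joint)
        if bone is None:
            continue                      # joints the rig does not model
        offset = rest_offsets.get(bone, np.array([1.0, 0.0, 0.0, 0.0]))
        bone_rotations[bone] = quat_multiply(offset, np.asarray(quat, float))
    return bone_rotations

# Example: a neutral capture leaves every mapped bone at its rest orientation.
captured = {j: [1.0, 0.0, 0.0, 0.0] for j in JOINT_MAP}
print(retarget(captured, rest_offsets={}))
```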
“…oral cavity) or simply ignore them. Other works like [Borshukov et al. 2006; Carranza et al. 2003; Casas et al. 2014; Dale et al. 2011; Fechteler et al. 2014, 2016; Kilner et al. 2006; Lipski et al. 2011; Paier et al. 2015, 2017], try to circumvent these limitations by employing image-based rendering approaches, where the facial performance is captured in geometry and texture space. [Dale et al. 2011] present a low cost system for facial performance transfer.…”
Section: Related Work 2.1 Modelling of Facial Expressions
confidence: 99%
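Capturing the facial performance "in geometry and texture space", as described above, typically means projecting the camera image onto the tracked mesh and writing the sampled colours into its texture map. A minimal point-wise sketch of that projection, assuming known camera intrinsics and per-vertex UV coordinates; real pipelines rasterise triangles and handle occlusion, and this is not the specific method of any cited paper.

```python
import numpy as np

def project_to_texture(vertices, uvs, image, K, texture):
    """Sample the camera image at each projected vertex and write the colour
    into the texture at that vertex's UV location.

    vertices : (N, 3) points in camera coordinates (z > 0)
    uvs      : (N, 2) texture coordinates in [0, 1]
    image    : (H, W, 3) camera frame
    K        : (3, 3) pinhole intrinsics
    texture  : (Th, Tw, 3) texture map, updated in place
    """
    h, w = image.shape[:2]
    th, tw = texture.shape[:2]
    proj = (K @ vertices.T).T                          # perspective projection
    px = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    colours = image[px[inside, 1], px[inside, 0]]      # sample camera pixels
    tex_xy = np.round(uvs[inside] * [tw - 1, th - 1]).astype(int)
    texture[tex_xy[:, 1], tex_xy[:, 0]] = colours      # write into UV space
    return texture

# Toy example: one vertex straight ahead of a 100x100 camera lands mid-texture.
K = np.array([[100.0, 0, 50], [0, 100.0, 50], [0, 0, 1]])
tex = np.zeros((64, 64, 3), dtype=np.uint8)
img = np.full((100, 100, 3), 255, dtype=np.uint8)
project_to_texture(np.array([[0.0, 0.0, 1.0]]), np.array([[0.5, 0.5]]), img, K, tex)
```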
“…Using a multilinear face model, they are able to track an actor's facial geometry in 2D video, which enables them to transfer facial performances between different persons via image-based rendering. Similar techniques were used in [Fechteler et al. 2014], [Fechteler et al. 2016] or [Thies et al. 2015]. Another image based animation strategy was proposed in [Casas et al. 2014; Paier et al. 2015, 2017], where low resolution 3D meshes were combined with highly detailed dynamic textures that allow generating high quality renderings.…”
Section: Related Work 2.1 Modelling of Facial Expressions
confidence: 99%
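A multilinear face model, as referenced above, represents face geometry as a core tensor contracted with separate identity and expression coefficient vectors, which is what makes per-person tracking and cross-person performance transfer possible. Below is a toy numpy illustration with made-up dimensions and random data, not the actual model used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multilinear face model: a core tensor of shape
# (3 * n_vertices, n_identity_modes, n_expression_modes) plus a mean shape.
n_vertices, n_id, n_expr = 5, 4, 3
core = rng.standard_normal((3 * n_vertices, n_id, n_expr))
mean_shape = rng.standard_normal(3 * n_vertices)

def synthesize(identity_w, expression_w):
    """Contract the core tensor with identity and expression weights:
    shape = mean + core x_2 identity x_3 expression."""
    return mean_shape + np.einsum("vie,i,e->v", core, identity_w, expression_w)

# One person (fixed identity weights) performing two different expressions.
identity_w = rng.standard_normal(n_id)
neutral = synthesize(identity_w, np.array([1.0, 0.0, 0.0]))
smile   = synthesize(identity_w, np.array([0.2, 0.8, 0.0]))
print(neutral.reshape(-1, 3).shape, smile.reshape(-1, 3).shape)  # (5, 3) twice
```

In this framing, tracking amounts to fitting the identity weights once per actor and the expression weights per frame; reusing the fitted expression weights with another actor's identity weights is the performance-transfer idea the statement describes.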
“…oral cavity) or simply ignore them. Other works like [2,6,7,11,15,16,25,29,34,35], try to circumvent these limitations by employing image-based rendering approaches, where the facial performance is captured in geometry and texture space. In [11], Dale et al present a low cost system for facial performance transfer.…”
Section: Related Work
confidence: 99%