2020
DOI: 10.1007/978-3-030-58621-8_4
HTML: A Parametric Hand Texture Model for 3D Hand Reconstruction and Personalization

Abstract: 3D hand reconstruction from images is a widely-studied problem in computer vision and graphics, and has a particularly high relevance for virtual and augmented reality. Although several 3D hand reconstruction approaches leverage hand models as a strong prior to resolve ambiguities and achieve more robust results, most existing models account only for the hand shape and poses and do not model the texture. To fill this gap, in this work we present HTML, the first parametric texture model of human hands. Our mode…

Cited by 51 publications (49 citation statements) | References 56 publications (79 reference statements)
“…Skin Tone & Textures. We adopt a state-of-the-art hand skin tone and texture model, HTML [35], for realistic appearance in the rendered images. HTML represents the hand's skin color and texture as continuous parameters in a PCA space.…”
Section: Online Synthesis For HOPE Task (mentioning)
confidence: 99%
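The PCA representation described in this excerpt can be sketched as follows. The texture resolution, component count, and the random placeholder mean and basis are illustrative assumptions, not values or data from the HTML paper:

```python
import numpy as np

# Toy dimensions -- assumptions for illustration, not the paper's values.
N_PIXELS = 64 * 64 * 3   # flattened RGB texture map (toy resolution)
N_COMPONENTS = 10        # assumed number of PCA components

rng = np.random.default_rng(0)
mean_texture = rng.random(N_PIXELS).astype(np.float32)                    # placeholder mean texture
basis = rng.standard_normal((N_PIXELS, N_COMPONENTS)).astype(np.float32)  # placeholder PCA basis

def texture_from_params(alpha: np.ndarray) -> np.ndarray:
    """Reconstruct a flattened texture map from continuous PCA parameters:
    T(alpha) = mean + basis @ alpha."""
    return mean_texture + basis @ alpha

# With all parameters at zero, the model reconstructs the mean texture.
tex = texture_from_params(np.zeros(N_COMPONENTS, dtype=np.float32))
```

Because the parameters are continuous, sampling or interpolating `alpha` yields a smooth space of plausible skin appearances, which is what makes such a model convenient for synthesizing varied rendered hands.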
“…The PCA technique is used to simplify the model. Qian et al. [36] complement MANO by augmenting it with a parametric texture model.…”
Section: Hand Parametric Models (mentioning)
confidence: 99%
“…Zhang et al. [53] and Boukhayma et al. [4] each propose an end-to-end neural network to predict motion parameters from a 2D image as input. Qian et al. [36] leverage the network in [4] to obtain the motion parameters of MANO and further refine the mesh model with a photometric loss. Other deep learning-based approaches either leverage depth information [29,32] or combine image and depth information together.…”
Section: Hand Pose Reconstruction (mentioning)
confidence: 99%
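The photometric refinement mentioned in this excerpt can be illustrated with a toy example: render the current parameter estimate, compare it pixel-wise to the observed image, and descend the gradient of the squared photometric error. The linear "renderer", dimensions, and learning rate below are assumptions for illustration and do not reflect the actual differentiable rendering pipeline of Qian et al.:

```python
import numpy as np

# Toy linear "renderer" R mapping parameters -> pixel values (an assumption;
# real pipelines use a differentiable mesh renderer).
rng = np.random.default_rng(1)
R = rng.standard_normal((50, 5))
theta_true = rng.standard_normal(5)
target = R @ theta_true            # observed image pixels

theta = np.zeros(5)                # initial estimate (e.g. from a network)
lr = 0.005
for _ in range(2000):
    residual = R @ theta - target  # photometric residual per pixel
    grad = 2.0 * R.T @ residual    # gradient of ||R @ theta - target||^2
    theta -= lr * grad             # gradient-descent refinement step

final_err = np.linalg.norm(R @ theta - target)
```

The same loop structure applies when `R` is replaced by a differentiable renderer: only the residual and its gradient change, which is why photometric losses slot naturally into network-initialized mesh refinement.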
“…We would like to thank the reviewers for their feedback, Jiayi Wang for the hand texture [24], Jessica Illera for the help with the study, and other members of the MSLab at URJC for their support. This work was funded in part by the European Research Council (ERC Consolidator Grant no. 772738 TouchDesign) and the Spanish Ministry of Science (RTI2018-098694-B-I00 VizLearning; PID2019-105579RB-I00/AEI/10.13039/501100011033).…”
Section: Acknowledgments (mentioning)
confidence: 99%