2021 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme51207.2021.9428299

Capturing Implicit Spatial Cues for Monocular 3D Hand Reconstruction

Abstract: With the development of parameterized hand models (e.g., MANO), it is possible to reconstruct a 3D hand mesh from a single 2D hand image by learning a few hand model parameters, rather than estimating hundreds of vertices on the mesh. However, learning these parameters from the 2D hand image is highly non-linear, as there is no explicit spatial correspondence between the parameters and image pixels. In this paper, we successfully leverage the graph convolutional network (GCN) to capture implicit spatial…
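The abstract's central idea is to regress a compact set of MANO parameters instead of predicting hundreds of mesh vertices directly. A minimal sketch of such a parameter-regression head is below; the feature dimension, the 48-pose + 10-shape split (axis-angle global rotation and articulation, plus shape betas), and the head design are illustrative assumptions, not the paper's exact architecture. A differentiable MANO layer (e.g., manopth's ManoLayer) would then map these parameters to the 778 mesh vertices and 21 joints.

```python
import torch
import torch.nn as nn

class MANOParamHead(nn.Module):
    """Sketch: regress MANO pose/shape parameters from backbone features.

    feat_dim and the 48 + 10 parameter split are assumptions for
    illustration; the paper's head may differ.
    """
    def __init__(self, feat_dim=2048, n_pose=48, n_shape=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_pose + n_shape)
        self.n_pose = n_pose

    def forward(self, feats):
        params = self.fc(feats)
        # A differentiable MANO layer would consume (pose, shape) and
        # output 778 mesh vertices and 21 joints.
        return params[:, :self.n_pose], params[:, self.n_pose:]

head = MANOParamHead()
pose, shape = head(torch.randn(4, 2048))  # batch of 4 feature vectors
print(pose.shape, shape.shape)            # torch.Size([4, 48]) torch.Size([4, 10])
```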

Cited by 2 publications (4 citation statements, 2023–2024) · References 23 publications (75 reference statements)
“…Our method outperforms the 10 recent baselines under all six evaluation metrics. The superior performance of our method and MANO-GCN [24] over the CNN-based baselines demonstrates that a GCN is suitable for modeling both sparse hand joints and dense mesh vertices. MANO-GCN also adopts a two-branch architecture, but it is inferior to our method under all six evaluation metrics.…”
Section: Comparison Between Our Methods and Baselines on the FreiHAND…
confidence: 93%
“…Hand shapes were regressed from the latent features by a Chebyshev spectral GCN. Q. Wu et al. [24] extracted latent features with HRNet [20]. Hand poses and shapes were then estimated by combining a differentiable MANO layer with a 3-layer Chebyshev spectral GCN.…”
Section: GCN for Hand Shape Modeling
confidence: 99%
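The statement above refers to a Chebyshev spectral GCN, i.e., graph convolutions that approximate a spectral filter with a K-order Chebyshev polynomial of the scaled graph Laplacian L~ = 2L/lambda_max − I (Defferrard et al., 2016). The sketch below shows one such layer; the dense-Laplacian representation, the K=3 order, and the layer sizes in the usage snippet are assumptions for brevity, and this is not the paper's code.

```python
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    """One Chebyshev spectral graph-convolution layer:
    X' = sum_{k=0}^{K-1} T_k(L~) X Theta_k, with the recurrence
    T_k(L~)X = 2 L~ T_{k-1}(L~)X - T_{k-2}(L~)X.
    """
    def __init__(self, in_feats, out_feats, K):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(K, in_feats, out_feats) * 0.01)

    def forward(self, x, lap):
        # x: (V, in_feats) vertex features; lap: (V, V) scaled Laplacian L~.
        Tx_prev, Tx = x, lap @ x              # T_0(L~)x = x, T_1(L~)x = L~ x
        out = Tx_prev @ self.theta[0]
        if self.theta.shape[0] > 1:
            out = out + Tx @ self.theta[1]
        for k in range(2, self.theta.shape[0]):
            Tx_next = 2 * (lap @ Tx) - Tx_prev  # Chebyshev recurrence
            out = out + Tx_next @ self.theta[k]
            Tx_prev, Tx = Tx, Tx_next
        return out

V, F = 778, 64                    # MANO mesh has 778 vertices; F is illustrative
x = torch.randn(V, F)
lap = torch.eye(V)                # placeholder for the scaled mesh Laplacian
conv = ChebConv(F, 32, K=3)
print(conv(x, lap).shape)         # torch.Size([778, 32])
```

Stacking three such layers over the hand-mesh graph yields the kind of 3-layer Chebyshev spectral GCN the citing paper describes.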