2020
DOI: 10.1007/978-3-030-58592-1_6

Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation Under Hand-Object Interaction

Abstract: We study how well different types of approaches generalise in the task of 3D hand pose estimation under single-hand scenarios and hand-object interaction. We show that the accuracy of state-of-the-art methods can drop, and that they fail mostly on poses absent from the training set. Unfortunately, since the space of hand poses is high-dimensional, it is inherently not feasible to cover the whole space densely, despite recent efforts in collecting large-scale training datasets. This sampling problem is even mo…

Cited by 47 publications (36 citation statements) · References 33 publications

“…Our approach is based on machine learning techniques, namely deep learning. We adopt the approach introduced in [37] as the Voxel-to-Voxel PoseNet and present an ablation study on a recent benchmark dataset, the HANDS19 Challenge Task 2 – Depth-Based 3D Hand Pose Estimation while Interacting with Objects [31]. We believe this to be the most challenging depth-based egocentric-view hand pose estimation dataset available to date.…”
Section: Methods · Citation type: mentioning
Confidence: 99%
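For readers unfamiliar with the voxel-to-voxel formulation mentioned in the statement above, the sketch below illustrates the general idea only; it is not the implementation from [37] or the citing paper. A 3D CNN maps a voxelised depth crop of the hand to one 3D heat-map per joint, and joint positions are read out with a soft-argmax. The grid size, channel widths, joint count and all names here are illustrative assumptions.

# Minimal sketch (assumptions: 21-joint skeleton, 44^3 occupancy grid, PyTorch).
import torch
import torch.nn as nn

NUM_JOINTS = 21   # assumed hand skeleton size
GRID = 44         # assumed voxel grid resolution

class VoxelToVoxelNet(nn.Module):
    """Toy voxel-to-voxel network: occupancy grid in, per-joint 3D heat-maps out."""
    def __init__(self, joints=NUM_JOINTS):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                                   # GRID -> GRID/2
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2),           # back to GRID
            nn.BatchNorm3d(16), nn.ReLU(),
            nn.Conv3d(16, joints, 1),                          # one heat-map per joint
        )

    def forward(self, vox):                 # vox: (B, 1, GRID, GRID, GRID)
        return self.decoder(self.encoder(vox))   # (B, J, GRID, GRID, GRID)

def soft_argmax_3d(heatmaps):
    """Differentiable read-out of per-joint voxel coordinates from 3D heat-maps."""
    b, j, d, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.view(b, j, -1), dim=-1).view(b, j, d, h, w)
    zs = torch.arange(d, dtype=probs.dtype)
    ys = torch.arange(h, dtype=probs.dtype)
    xs = torch.arange(w, dtype=probs.dtype)
    z = (probs.sum(dim=(3, 4)) * zs).sum(-1)    # expected depth index
    y = (probs.sum(dim=(2, 4)) * ys).sum(-1)    # expected row index
    x = (probs.sum(dim=(2, 3)) * xs).sum(-1)    # expected column index
    return torch.stack([x, y, z], dim=-1)       # (B, J, 3) voxel coordinates

# Usage with a placeholder input crop:
vox = torch.zeros(1, 1, GRID, GRID, GRID)       # voxelised depth crop around the hand
joints_vox = soft_argmax_3d(VoxelToVoxelNet()(vox))   # (1, NUM_JOINTS, 3)

In a full pipeline the predicted voxel coordinates would then be mapped back to metric 3D coordinates using the depth camera intrinsics and the reference point of the hand crop.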
“…We train and evaluate our system on the data provided in the HANDS19 Challenge Task 2 – Depth-Based 3D Hand Pose Estimation while Interacting with Objects [31], [65]. This task builds on the F-PHAB dataset [4], in which objects are manipulated by a subject seen from an egocentric viewpoint; see Figure 4.…”
Section: A. Data · Citation type: mentioning
Confidence: 99%