2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00088
A2J: Anchor-to-Joint Regression Network for 3D Articulated Pose Estimation From a Single Depth Image

Abstract: For the task of 3D hand and body pose estimation from a single depth image, a novel anchor-based approach termed Anchor-to-Joint regression network (A2J), with end-to-end learning ability, is proposed. Within A2J, anchor points able to capture global-local spatial context information are densely set on the depth image as local regressors for the joints. They contribute to predicting the positions of the joints in an ensemble way to enhance generalization ability. The proposed 3D articulated pose estimation paradigm is different from th…
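The abstract's core idea — densely tiled anchor points that each regress per-joint offsets, combined as a weighted ensemble — can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation; the function name, the softmax weighting over anchor response scores, and all array names are my own.

```python
# Hypothetical sketch of A2J-style anchor-to-joint ensemble regression.
# Assumption: anchor "informativeness" scores are turned into weights via a
# softmax over anchors, and joint positions are the weighted mean of the
# per-anchor candidates (anchor position + predicted offset).
import numpy as np

def aggregate_joint_positions(anchors, offsets, response):
    """Combine per-anchor joint predictions into final joint positions.

    anchors:  (A, 2) in-plane coordinates of anchors densely set on the image
    offsets:  (A, J, 2) predicted offsets from each anchor to each joint
    response: (A, J) unnormalized anchor informativeness scores
    Returns:  (J, 2) estimated joint positions as a weighted ensemble.
    """
    # Softmax over the anchor axis (numerically stabilized).
    weights = np.exp(response - response.max(axis=0, keepdims=True))
    weights /= weights.sum(axis=0, keepdims=True)
    per_anchor = anchors[:, None, :] + offsets            # (A, J, 2) candidates
    return (weights[..., None] * per_anchor).sum(axis=0)  # (J, 2)

# Toy example: 4 anchors on a grid, 1 joint at (5, 5), exact offsets,
# uniform response scores.
anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
offsets = np.array([[5., 5.]])[None] - anchors[:, None, :]  # (4, 1, 2)
response = np.zeros((4, 1))
print(aggregate_joint_positions(anchors, offsets, response))  # → [[5. 5.]]
```

Because every anchor votes for every joint, a few badly regressed offsets are averaged out by the rest of the ensemble, which is the generalization benefit the abstract points to.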

Cited by 164 publications (145 citation statements)
References 45 publications
“…We first compare our method with others on the HANDS 2017 dataset. Since the HANDS 2017 dataset does not provide test set labels publicly, we evaluate using only the mean joint error metric and compare our method with Vanora, THU VCLab (Chen et al 2018a), oasis (Moon, Chang, and Lee 2018a), RCN-3D (Yuan et al 2018), V2V-PoseNet (Moon, Chang, and Lee 2018b) and A2J (Xiong et al 2019). Results in Table 5 reflect that our ResNet18-based method already exceeds previous state-of-the-art methods by a large margin.…”
Section: Comparison with State-of-the-art Methods
confidence: 99%
“…Recent work (Xiong et al 2019) uses 2D offsets between anchor points and hand joints to represent 2D positions of joints. Due to the large variance of offsets, we further decompose them into 2D directional unit vector fields and closeness heatmaps, reflecting 2D directions and closeness from each pixel in depth images to target joints.…”
Section: Comprehensive Explorations
confidence: 99%
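The decomposition this citing work describes — replacing raw 2D offsets with a unit direction field plus a closeness heatmap — can be sketched in a few lines. This is a hedged illustration, not the cited paper's code: the function name, the linear closeness falloff, and the `radius` parameter are assumptions introduced here.

```python
# Illustrative sketch: decompose per-pixel 2D offsets to a target joint into
# (a) a unit directional vector field and (b) a closeness heatmap in [0, 1].
# The linear decay to zero at `radius` pixels is an assumed design choice.
import numpy as np

def decompose_offsets(pixel_coords, joint_xy, radius=10.0):
    """pixel_coords: (N, 2) pixel positions; joint_xy: (2,) joint position.

    Returns:
      direction: (N, 2) unit vectors pointing from each pixel to the joint
      closeness: (N,) values in [0, 1], largest at the joint itself
    """
    offsets = joint_xy - pixel_coords                      # (N, 2) raw offsets
    dist = np.linalg.norm(offsets, axis=1)                 # (N,)
    # Epsilon guard avoids division by zero at the joint's own pixel.
    direction = offsets / np.maximum(dist, 1e-8)[:, None]
    closeness = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return direction, closeness

pixels = np.array([[0., 0.], [3., 4.]])
d, c = decompose_offsets(pixels, np.array([3., 4.]), radius=10.0)
print(d[0])  # → [0.6 0.8], unit vector from (0,0) toward the joint at (3,4)
print(c)     # → [0.5 1. ]
```

Bounding both outputs (unit norm for direction, [0, 1] for closeness) removes the large variance of raw offsets that the excerpt identifies as the motivation for the decomposition.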