2020
DOI: 10.48550/arxiv.2010.08221
Preprint

HPERL: 3D Human Pose Estimation from RGB and LiDAR

Abstract: In-the-wild human pose estimation has huge potential for various fields, ranging from animation and action recognition to intention recognition and prediction for autonomous driving. The current state of the art focuses only on RGB and RGB-D approaches for predicting the 3D human pose. However, not using precise LiDAR depth information limits the performance and leads to very inaccurate absolute pose estimation. With LiDAR sensors becoming more affordable and common on robots and autonomous vehicle setups…
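The abstract's central claim is that metric LiDAR depth is what makes *absolute* (camera-frame, metric-scale) pose estimation possible, where RGB-only methods can only recover pose up to scale. The sketch below illustrates that idea in its simplest form; it is not the HPERL pipeline. It assumes known camera intrinsics `K` and a LiDAR-to-camera extrinsic `T_cam_lidar` (illustrative names) and lifts already-detected 2D keypoints to metric 3D joints using the depth of the nearest projected LiDAR return.

```python
# Minimal sketch (not the HPERL architecture): lifting 2D keypoints to
# absolute, metric 3D joints using LiDAR depth. K and T_cam_lidar are
# assumed, illustrative calibration inputs.
import numpy as np

def lift_keypoints(keypoints_2d, lidar_points, K, T_cam_lidar):
    """keypoints_2d: (J, 2) pixel coordinates of detected joints.
    lidar_points: (N, 3) points in the LiDAR frame.
    K: (3, 3) camera intrinsics; T_cam_lidar: (4, 4) LiDAR-to-camera pose."""
    # Transform LiDAR points into the camera frame.
    pts_h = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep returns in front of the camera

    # Project the points onto the image plane.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    K_inv = np.linalg.inv(K)
    joints_3d = np.empty((len(keypoints_2d), 3))
    for j, kp in enumerate(keypoints_2d):
        # Take the depth of the LiDAR return projecting closest to the keypoint...
        nearest = np.argmin(np.linalg.norm(uv - kp, axis=1))
        z = pts_cam[nearest, 2]
        # ...and back-project the pixel with that metric depth to a 3D joint.
        joints_3d[j] = z * (K_inv @ np.array([kp[0], kp[1], 1.0]))
    return joints_3d
```

A monocular network must guess `z` from image cues alone, which is exactly the source of the "very inaccurate absolute pose estimation" the abstract refers to; here the LiDAR supplies it directly.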

Cited by 1 publication (2 citation statements)
References 35 publications

“…It is often performed using RGB and RGB-D data [13], [14]. In [15], the authors propose a multi-modal system using RGB imagery and LiDAR scans to obtain a precise 3D pose estimation. However, these methods require large CNNs and large amounts of data to achieve good results, since they use high-dimensional inputs such as high-resolution images and dense point clouds to predict body poses.…”
Section: Related Work
confidence: 99%
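For intuition about the multi-modal setup this citation describes, here is a toy two-branch fusion network: an RGB CNN branch and a PointNet-style per-point branch whose global features are concatenated to regress body joints. This is a sketch, not the architecture from [15] or HPERL; all layer sizes and the joint count are illustrative assumptions. It shows why both branches consume high-dimensional inputs, which is what drives the model size and data requirements noted in the quote.

```python
# Toy two-branch RGB + LiDAR pose regressor (illustrative, not from [15]).
import torch
import torch.nn as nn

class RGBLiDARPoseNet(nn.Module):
    def __init__(self, num_joints=17):
        super().__init__()
        # Image branch: high-resolution RGB input -> global feature vector.
        self.rgb = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Point branch: shared per-point MLP, max-pooled to a global feature
        # (PointNet-style aggregation over an unordered point cloud).
        self.pts = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Linear(64 + 128, num_joints * 3)

    def forward(self, image, points):
        # image: (B, 3, H, W); points: (B, 3, N)
        f_rgb = self.rgb(image)                     # (B, 64)
        f_pts = self.pts(points).max(dim=2).values  # (B, 128)
        out = self.head(torch.cat([f_rgb, f_pts], dim=1))
        return out.view(image.size(0), -1, 3)       # (B, num_joints, 3)
```
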
“…5, and each one of them is mapped to a specific command. (Fig. 5: Static (1-12) and dynamic (13-18) gestures learned by the classifier; images from [28].)…”
Section: Gesture-based Vehicle Teleoperation
confidence: 99%