2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.373
Monocular 3D Human Pose Estimation by Predicting Depth on Joints

Cited by 100 publications (67 citation statements)
References 25 publications
“…or resort to an approximate solution [28]. The approach [54] tries to directly regress the absolute depth from the cropped and scaled image regions, which is a very ambiguous task. In contrast, our approach does not make any assumptions, nor does it try to solve any ambiguous task.…”
Section: Related Work
confidence: 99%
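The ambiguity the excerpt above refers to is the classical scale-depth ambiguity of a pinhole camera: a subject twice as large at twice the depth projects to the same pixel extent, so a cropped and rescaled image region carries no absolute-depth signal on its own. A minimal sketch (with hypothetical focal length and subject sizes, not values from the paper):

```python
def projected_height(subject_height_m: float, depth_m: float,
                     focal_px: float = 1000.0) -> float:
    """Pinhole projection: image height in pixels of an upright subject.

    h_img = f * H / Z, where f is focal length (px), H the subject
    height (m), and Z the depth along the optical axis (m).
    """
    return focal_px * subject_height_m / depth_m

# A 1.7 m person at 3 m and a (hypothetical) 3.4 m figure at 6 m
# occupy identical pixel extents, hence yield identical crops:
near = projected_height(subject_height_m=1.7, depth_m=3.0)
far = projected_height(subject_height_m=3.4, depth_m=6.0)
assert abs(near - far) < 1e-9
```

Because the two configurations are indistinguishable in the crop, a regressor seeing only cropped, scaled regions must fall back on learned priors (e.g., typical human height) to estimate absolute depth.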
“…Fully Supervised: These include approaches such as [26,44,23] that use paired 2D-3D data comprised of ground truth 2D locations of joint landmarks and corresponding 3D ground truth for learning. For example, Martinez et al.…”
Section: Related Work
confidence: 99%
“…These methods are LinKDE [5], Tekin et al. [15], Li et al. [36], Zhou et al. [14], Zhou et al. [4], Du et al. [37], Sanzari et al. [34], Yasin et al. [17], and Bogo et al. [24]. Moreover, we compare other competing methods, i.e., Moreno-Noguer et al. [38], Tome et al. [20], Chen et al. [16], Pavlakos et al. [19], Zhou et al. [23], Bruce et al. [51], Tekin et al. [50], and our conference version, i.e., Lin et al. [21]. For those compared methods (i.e., [4], [5], [15], [16], [17], [24], [34], [36], [37], [38], [51]) whose source codes are not publicly available, we directly obtain their results from their published papers. For the other methods (i.e., [14], [19], [20], [21], [23], [50]), we directly use their official implementations for comparisons.…”
Section: Comparisons With Existing Methods
confidence: 99%