2023
DOI: 10.1016/j.aei.2023.101875
Excavator 3D pose estimation using deep learning and hybrid datasets

Cited by 25 publications (2 citation statements)
References 73 publications
“…Similarly, Liu et al. [21] examined a computationally efficient tracking approach based on 3D human skeletons extracted from stereo videos to monitor workers' situational awareness and prevent accidents on construction sites. Assadzadeh et al. used YOLOv5 to detect the excavator and HRNet to perform 2D pose estimation within the predicted bounding box [65]. Wen et al. utilized a modified Keypoint R-CNN algorithm to extract the 2D pose of an excavator from video frames [66].…”
Section: Pose Estimation
confidence: 99%
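The two-stage, top-down pipeline mentioned in the statement above (detect the machine first, then estimate keypoints inside the predicted bounding box) can be sketched roughly as follows. This is a minimal illustrative sketch, not code from the cited papers: YOLOv5 is loaded from the public ultralytics/yolov5 torch.hub entry point, and torchvision's Keypoint R-CNN stands in for the keypoint stage since HRNet is not bundled with torchvision. The stock COCO checkpoints have no excavator class and predict human keypoints only, so a real excavator pipeline would require fine-tuning both stages on excavator annotations.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Stage 1: object detector. The stock COCO checkpoint has no "excavator"
# class; a fine-tuned checkpoint is assumed for real use.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Stage 2: keypoint estimator run on the cropped detection. Pretrained
# weights predict 17 human keypoints; excavator keypoints need fine-tuning.
pose_model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
pose_model.eval()

def estimate_2d_pose(image_path):
    image = Image.open(image_path).convert("RGB")

    # Detect candidate machines and keep the highest-confidence box.
    detections = detector(image).xyxy[0]  # columns: x1, y1, x2, y2, conf, cls
    if detections.shape[0] == 0:
        return None
    best = detections[detections[:, 4].argmax()]
    x1, y1, x2, y2 = best[:4].tolist()

    # Run the keypoint head on the crop only, as in the top-down approach.
    crop = image.crop((x1, y1, x2, y2))
    with torch.no_grad():
        out = pose_model([to_tensor(crop)])[0]
    if out["keypoints"].shape[0] == 0:
        return None

    # Map keypoints from crop coordinates back to full-image coordinates.
    return out["keypoints"][0, :, :2] + torch.tensor([x1, y1])

The crop-then-estimate structure is what makes the approach top-down: the keypoint network only ever sees a region that the detector has already classified, which keeps the pose stage cheap and focused on a single machine at a time.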
“…[1][2][3][4][5][6]. For the special environment of on-orbit work, pose estimation algorithms [7,8] based on low-quality, low-power monocular sensors provide a feasible scheme for space applications, and they have received extensive attention from research institutions and researchers. Some institutions [9][10][11][12] have carried out relevant studies and semi-physical simulation experiments on the pose estimation of non-cooperative space objects using monocular vision cameras.…”
Section: Introduction
confidence: 99%