2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00170
Single-view robot pose and joint angle estimation via render & compare

Cited by 36 publications (60 citation statements) · References 47 publications
“…Modeling the pose and shape of 3D objects from monocular images is one of the longest standing objectives of computer vision [42,52]. Recent methods train deep neural network models to compute the object shape [11,16,36,55,70] and pose [31,32,34,68] directly from image pixels. Learned object shape reconstruction from single view images has initially focused on point-cloud [48], mesh [16,66] and voxel [12,51] representations.…”
Section: Related Work
confidence: 99%
“…Instead of depth maps, the large majority of works focus on RGB images. Labbe et al [17] proposed a method that, given a single RGB image of a known articulated robot, estimates the 6D camera-to-robot pose in terms of rigid translation and rotation through a render-and-compare approach. A reference point and an anchor part are needed to perform the estimation, and their choice significantly affects the performance.…”
Section: Related Work, A. Robot Pose Estimation (RPE)
confidence: 99%
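The statement above describes a render-and-compare approach: render a model of the robot at a candidate pose, measure the discrepancy against the observation, and iteratively refine the pose. The following is a minimal sketch of that loop, not the method of Labbe et al.: it uses a toy 2D "robot" of three hypothetical keypoints, a rigid transform in place of a full image renderer, and finite-difference gradient descent in place of the paper's learned refinement.

```python
import math

# Toy "robot": three keypoints in the robot's own frame (hypothetical values).
MODEL_POINTS = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.5)]

def render(pose):
    """'Render' the model: apply a 2D rigid transform (tx, ty, theta)
    to each keypoint, standing in for a full image renderer."""
    tx, ty, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in MODEL_POINTS]

def compare(rendered, observed):
    """Discrepancy: sum of squared distances between keypoint sets."""
    return sum((rx - ox) ** 2 + (ry - oy) ** 2
               for (rx, ry), (ox, oy) in zip(rendered, observed))

def estimate_pose(observed, iters=2000, lr=0.05, eps=1e-5):
    """Render-and-compare loop: descend the comparison error over the pose
    using finite-difference gradients."""
    pose = [0.0, 0.0, 0.0]  # initial guess
    for _ in range(iters):
        base = compare(render(pose), observed)
        grad = []
        for i in range(3):
            bumped = pose[:]
            bumped[i] += eps
            grad.append((compare(render(bumped), observed) - base) / eps)
        pose = [p - lr * g for p, g in zip(pose, grad)]
    return pose

# Recover a known ground-truth pose from its own rendering.
true_pose = (0.3, -0.2, 0.4)
observed = render(true_pose)
estimated = estimate_pose(observed)
```

The loop recovers `true_pose` to within a few decimals; the real method replaces the keypoint comparison with an image-space comparison and the generic descent with a learned update, and additionally estimates joint angles, not just the rigid pose.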
“…Data augmentation is a huge field (e.g., Shorten and Khoshgoftaar (2019)) with various techniques, and an in-depth discussion of each of these techniques is out of the scope of this paper. Nevertheless, we can say that techniques such as background augmentation, adding noise, or cropping/transforming images are common means of increasing the data variation in the source domain (Lambrecht and Kästner (2019); Gulde et al (2019); Lee et al (2020); Labbe et al (2021)). The model is then trained under more varied conditions, which helps improve generalization and break the dependence on annotated data from the target domain.…”
Section: Domain Augmentation
confidence: 99%
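The three augmentation families named in the statement above (background augmentation, noise, cropping) can each be sketched in a few lines. This is an illustrative toy, assuming images stored as plain lists of grayscale pixel values; the function names and parameters are hypothetical, not from any of the cited works.

```python
import random

def add_noise(img, sigma=10.0, seed=0):
    """Add Gaussian pixel noise, clamping results to the [0, 255] range."""
    rng = random.Random(seed)
    return [[min(255.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in row]
            for row in img]

def random_crop(img, ch, cw, seed=0):
    """Cut a random ch-by-cw patch out of the image."""
    rng = random.Random(seed)
    top = rng.randrange(len(img) - ch + 1)
    left = rng.randrange(len(img[0]) - cw + 1)
    return [row[left:left + cw] for row in img[top:top + ch]]

def swap_background(img, mask, background):
    """Background augmentation: keep pixels where the foreground mask is
    True, replace the rest with pixels from a new background image."""
    return [[p if m else b for p, m, b in zip(ri, rm, rb)]
            for ri, rm, rb in zip(img, mask, background)]

# Tiny demo: a uniform grey 6x8 image.
img = [[100.0] * 8 for _ in range(6)]
noisy = add_noise(img)
patch = random_crop(img, 4, 4)
```

In the cited robot-pose setting, the foreground mask would come from the renderer (the robot's silhouette), so synthetic robots can be pasted over varied backgrounds to widen the source domain.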