2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)
DOI: 10.1109/etfa.2019.8868243

Learning to Predict Robot Keypoints Using Artificially Generated Images

Abstract: This work considers robot keypoint estimation on color images as a supervised machine learning task. We propose the use of probabilistically created renderings to overcome the lack of labeled real images. Rather than sampling from stationary distributions, our approach introduces a feedback mechanism that constantly adapts probability distributions according to current training progress. Initial results show that our approach achieves near-human-level accuracy on real images. Additionally, we demonstrate that feed…
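
The abstract describes a feedback mechanism that adapts the sampling distributions of the rendering parameters to the current training progress rather than keeping them stationary. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation; the class name, the update rule, and the train_and_validate stub are assumptions.

import numpy as np


class AdaptiveSampler:
    """Samples one scene parameter (e.g. lighting strength) from a normal
    distribution whose spread is adapted to the current training progress."""

    def __init__(self, mean, std, min_std=0.01, max_std=2.0, rate=0.05):
        self.mean, self.std = mean, std
        self.min_std, self.max_std, self.rate = min_std, max_std, rate

    def sample(self):
        return np.random.normal(self.mean, self.std)

    def feedback(self, val_error, target_error):
        # If the keypoint error is already below the target, widen the
        # distribution to make the renderings harder; otherwise narrow it.
        if val_error < target_error:
            self.std = min(self.std * (1 + self.rate), self.max_std)
        else:
            self.std = max(self.std * (1 - self.rate), self.min_std)


def train_and_validate(light_strength):
    # Placeholder for rendering a batch with the sampled parameter, training
    # the keypoint network on it, and returning a validation error in pixels.
    return abs(np.random.normal(6.0, 1.0))


light = AdaptiveSampler(mean=1.0, std=0.1)
for epoch in range(100):
    strength = light.sample()                 # passed to the renderer
    error = train_and_validate(strength)
    light.feedback(error, target_error=5.0)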

Cited by 18 publications (12 citation statements: 0 supporting, 12 mentioning, 0 contrasting). References 7 publications. Citing publications span 2020-2024.
“…Given the high demand for large annotated datasets for deep learning, there has been an increase in both synthetic datasets [76,65,45,25,92,62,20] and in tools for generating such data [37,74,12,15,85]. The Cycles renderer included in Blender has been widely used in the research community for generating synthetic data because of its ray tracing ability [24,49,30,66,83,64,26]. In an attempt to more easily generate synthetic images, Denninger et al. [15] introduced an extension to Blender that renders objects falling onto a plane with randomized camera poses.…”
Section: Related Work (mentioning)
confidence: 99%
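
The citing works above point to Blender's Cycles renderer as a common way to produce ray-traced synthetic training images with randomized camera poses. The following is a minimal sketch of such a setup using Blender's bpy API (run inside Blender, e.g. blender -b -P script.py); the object names 'Camera' and 'Cube' are assumptions taken from Blender's default startup scene, not from any of the cited tools.

import random
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'          # ray-traced renderer referenced above

cam = bpy.data.objects['Camera']        # default camera in the startup scene
target = bpy.data.objects['Cube']       # stand-in for the object of interest

# Keep the camera aimed at the target while its position is randomized.
track = cam.constraints.new(type='TRACK_TO')
track.target = target
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

for i in range(10):
    # Sample a camera pose from simple uniform distributions.
    cam.location = (random.uniform(-4, 4),
                    random.uniform(-4, 4),
                    random.uniform(1, 5))
    scene.render.filepath = f"/tmp/render_{i:03d}.png"
    bpy.ops.render.render(write_still=True)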
“…For example, Iraci (2013) used Cycles to render transparent objects in order to train a computer vision network to detect the point cloud (Sajjan et al., 2019). Blender and Cycles have also been used to generate data sets of head poses (Gu et al., 2017), eye poses (Wood et al., 2015), different kinds of objects (Ron & Elbaz, 2020), robot poses (Heindl et al., 2019), and so on. In an attempt to more easily generate synthetic images, Denninger et al. (2019) introduced an extension to Blender that renders falling objects onto a plane with randomized camera poses.…”
Section: Related Work (mentioning)
confidence: 99%
“…Their zero-shot model produced adequate results; however, they needed to incorporate real-world data for better performance. The authors in [18] exploited the randomisation of both physical and visual properties to train a dexterous robotic hand to perform in-hand manipulation; they further improved their work in [24] by proposing an iterative approach to learn the randomisation parameters' distributions, the so-called Automatic Domain Randomisation [25]-[27].…”
Section: B. Domain Randomisation (mentioning)
confidence: 99%
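
Automatic Domain Randomisation, as referenced in the citation above, iteratively learns the distributions of the randomisation parameters: a parameter's range is widened when the learner copes with values at the range boundary and shrunk when it does not. Below is a rough, hypothetical sketch of that range-expansion loop; the thresholds, step size, and evaluate_at stub are illustrative assumptions, not the cited implementation.

import random


def evaluate_at(value):
    # Placeholder: returns a performance score (e.g. task success rate) when
    # the environment parameter is pinned to `value`.
    return random.random()


class ADRParameter:
    def __init__(self, low, high, step=0.1, hi_thresh=0.8, lo_thresh=0.4):
        self.low, self.high = low, high
        self.step, self.hi, self.lo = step, hi_thresh, lo_thresh

    def sample(self):
        return random.uniform(self.low, self.high)

    def update(self):
        # Probe performance at the upper boundary of the current range.
        score = evaluate_at(self.high)
        if score > self.hi:
            self.high += self.step                              # model copes: widen
        elif score < self.lo:
            self.high = max(self.low, self.high - self.step)    # too hard: shrink


friction = ADRParameter(low=0.5, high=0.6)
for _ in range(1000):
    env_friction = friction.sample()    # used to configure the simulator
    friction.update()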