2020 IEEE International Conference on Robotics and Automation (ICRA) 2020
DOI: 10.1109/icra40945.2020.9197309
Stillleben: Realistic Scene Synthesis for Deep Learning in Robotics

Abstract: Training data is the key ingredient for deep learning approaches, but difficult to obtain for the specialized domains often encountered in robotics. We describe a synthesis pipeline capable of producing training data for cluttered scene perception tasks such as semantic segmentation, object detection, and correspondence or pose estimation. Our approach arranges object meshes in physically realistic, dense scenes using physics simulation. The arranged scenes are rendered using high-quality rasterization with ra…
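The sample-then-settle structure the abstract describes (drop object meshes into a scene, let physics arrange them, then render labeled views) can be sketched as a toy loop. This is an illustrative assumption, not the paper's API: `arrange_scene`, the 5 cm stacking heuristic, and the proximity test stand in for a real physics engine.

```python
import random
from dataclasses import dataclass

@dataclass
class ObjectPose:
    """Resting pose of one mesh in the synthesized scene (metres)."""
    mesh_id: str
    x: float
    y: float
    z: float

def arrange_scene(mesh_ids, table_size=1.0, seed=None):
    """Drop each mesh at a random (x, y) above the table and 'settle' it.

    A real pipeline runs a physics simulation here; this toy version
    simply stacks objects that land close together, to illustrate the
    sample-then-settle structure of dense cluttered-scene synthesis.
    """
    rng = random.Random(seed)
    poses = []
    for mesh_id in mesh_ids:
        x = rng.uniform(0.0, table_size)
        y = rng.uniform(0.0, table_size)
        # Settle: rest on the tallest nearby object, else on the table (z = 0).
        z = 0.0
        for p in poses:
            if abs(p.x - x) < 0.1 and abs(p.y - y) < 0.1:
                z = max(z, p.z + 0.05)  # assumed 5 cm object height
        poses.append(ObjectPose(mesh_id, x, y, z))
    return poses

# Each settled scene would then be rendered to produce images with
# per-pixel semantic labels and ground-truth 6D poses.
scene = arrange_scene(["mug", "bowl", "drill"], seed=0)
```

Seeding the generator makes each synthesized scene reproducible, which is useful when regenerating a training set.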

Cited by 32 publications (20 citation statements); references 21 publications.
“…2). Our approach is composed of three main components: a renderer [10], a deep CNN, and a shape space. The renderer is in charge of generating 2D images of realistic 3D models which will be used to train the CNN.…”
Section: Methods
confidence: 99%
“…The use of synthetic data to generate training samples has been widely adopted in the deep learning community for object detection, semantic segmentation and pose estimation tasks [10][11][12][13]. One of the first successful attempts was proposed by Tobin et al [14], who trained an object detection network using only synthetic data and were able to transfer the network to real-world applications.…”
Section: Related Work, A. Rendering for Deep Learning
confidence: 99%
“…While these methods show great results, they require domain knowledge to find good parameters. With regard to creating photo-realistic images, there have also been many research efforts [7], [16], [17], [18]. These sources show that the quality of the object models, such as textures/materials, and of the renderers, including shadowing/lighting, is important for sim2real.…”
Section: Related Work, A. Learning with Synthetic Data
confidence: 99%
“…8) converts our parametrized world model into 3D point clouds that are suitable for point-to-point registration with the measurements P C s of the Velodyne 3D LiDAR, which is moved to capture a dense 3D scan of the pile or brick scene. We render the parametrized world model using an OpenGL-based renderer (Schwarz and Behnke, 2020) and obtain the point cloud P C m . Both point clouds are represented in the base-link B.…”
Section: Rendering and Sampling
confidence: 99%
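The last statement above renders a parametrized world model into a point cloud for registration against LiDAR measurements. The standard way to obtain such a cloud from a rendered depth image is pinhole back-projection; the sketch below shows that step under assumed intrinsics (`fx`, `fy`, `cx`, `cy`), with the function name chosen here for illustration.

```python
def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (rows of metric depth values) into
    camera-frame 3D points with the pinhole model:

        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
        Z = depth[v][u]

    A depth of zero marks an invalid pixel and is skipped, so the
    result only contains points where the renderer drew geometry.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0.0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Tiny 2x2 depth image: only pixel (u=1, v=0) is valid, at 2 m depth.
# With the principal point at (cx=1, cy=0) it back-projects onto the
# optical axis: (0, 0, 2).
cloud = depth_to_pointcloud([[0.0, 2.0], [0.0, 0.0]],
                            fx=1.0, fy=1.0, cx=1.0, cy=0.0)
```

The resulting camera-frame cloud would then be transformed into the common base frame before point-to-point registration against the measured scan.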