2018 IEEE International Conference on Robotics and Automation (ICRA) 2018
DOI: 10.1109/icra.2018.8461195
Fast Object Learning and Dual-arm Coordination for Cluttered Stowing, Picking, and Packing

Abstract: Robotic picking from cluttered bins is a demanding task, for which Amazon Robotics holds challenges. The 2017 Amazon Robotics Challenge (ARC) required stowing items into a storage system, picking specific items, and packing them into boxes. In this paper, we describe the entry of team NimbRo Picking. Our deep object perception pipeline can be quickly and efficiently adapted to new items using a custom turntable capture system and transfer learning. It produces high-quality item segments, on which grasp poses a…


Cited by 78 publications (55 citation statements)
References 20 publications
“…We apply our object segmentation approach to RGB images from the Kinect V2 [20]. This approach is able to produce pixel-(or point-)wise segmentation directly.…”
Section: Object Segmentation
confidence: 99%
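The pixel-wise output described in the excerpt above can be sketched as a per-pixel argmax over class score maps. This is an illustrative stand-in, not the authors' code; the function name and shapes are assumptions.

```python
import numpy as np

def pixelwise_labels(scores: np.ndarray) -> np.ndarray:
    """Turn per-class score maps (num_classes, H, W) into a label image (H, W)
    by taking the highest-scoring class at every pixel."""
    return np.argmax(scores, axis=0)

# Toy example: class 1 dominates the top half, class 2 the bottom half.
scores = np.zeros((3, 4, 4))
scores[1, :2, :] = 1.0
scores[2, 2:, :] = 1.0
labels = pixelwise_labels(scores)
```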
“…The pretrained RefineNet network from the semantic segmentation is used to extract features. To generate the ground truth poses for training the network, the data acquisition pipeline described in [20] was extended to record turntable poses automatically and fuse captures with different object poses or different objects with minimal user intervention.…”
Section: Pose Estimation
confidence: 99%
“…Those are composed of a small number of captured background images which are augmented randomly with inserted objects. This approach follows Schwarz et al [9] closely, with the exception that the inserted object segments are rendered from CAD meshes using the open-source Blender renderer. The core of the model consists of four ResNet blocks.…”
Section: Perception
confidence: 99%
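The augmentation described above — inserting rendered object segments into captured background images — can be sketched as alpha compositing. This is a hedged, minimal version; the function name, shapes, and placement are assumptions, not the cited implementation.

```python
import numpy as np

def insert_segment(background, segment_rgb, segment_alpha, top, left):
    """Paste an object segment (RGB plus alpha mask) onto a background image
    at the given position, blending by the mask. Returns a new image."""
    out = background.copy()
    h, w = segment_alpha.shape
    region = out[top:top + h, left:left + w]
    mask = segment_alpha[..., None]  # broadcast mask over the RGB channels
    out[top:top + h, left:left + w] = mask * segment_rgb + (1 - mask) * region
    return out

# Toy usage: white 2x2 object pasted into a black 8x8 background.
bg = np.zeros((8, 8, 3))
seg = np.ones((2, 2, 3))
alpha = np.ones((2, 2))
aug = insert_segment(bg, seg, alpha, top=3, left=3)
```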
“…At inference time, also following Schwarz et al [9], we postprocess the semantic segmentation to find individual object contours. The dominant object is found using the pixel count and is extracted from the input image for further processing.…”
Section: Perception
confidence: 99%
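The post-processing step quoted above — picking the dominant object by pixel count and extracting it for further processing — can be sketched as follows. Function and variable names are illustrative assumptions; the crop here is a simple bounding box around the dominant class's pixels.

```python
import numpy as np

def extract_dominant(labels, image, background=0):
    """Count pixels per non-background class in a label image, pick the
    largest (dominant) object, and crop its bounding box from the input."""
    classes, counts = np.unique(labels[labels != background], return_counts=True)
    dominant = classes[np.argmax(counts)]
    ys, xs = np.nonzero(labels == dominant)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return dominant, crop

# Toy usage: class 2 covers 9 pixels, class 1 only one, so class 2 wins.
labels = np.zeros((6, 6), dtype=int)
labels[0, 0] = 1
labels[2:5, 2:5] = 2
image = np.arange(36).reshape(6, 6)
dominant, crop = extract_dominant(labels, image)
```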
“…For robotic manipulation, pixel-wise object segmentation is a crucial component. Previous work utilizes semantic segmentation models for pick-and-place of various objects [8]- [13]. Since semantic segmentation cannot separate different instances of the same class, these works assume that same-class objects are not closely located and can be separated by clustering.…”
Section: Introduction
confidence: 99%
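The clustering step described in the excerpt above can be sketched by splitting one class's binary mask into spatially separate instances. Connected-component labeling is used here as a simple stand-in for the clustering the excerpt mentions (scipy.ndimage.label would do the same job); the code and names are illustrative assumptions.

```python
from collections import deque

import numpy as np

def connected_instances(mask):
    """Label 4-connected components of a boolean mask, so each spatially
    separate blob of one semantic class gets its own instance id (1, 2, ...)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already assigned to an instance
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:  # breadth-first flood fill of this component
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels

# Toy usage: two separate blobs of the same class become two instances.
mask = np.zeros((5, 5), dtype=bool)
mask[0, :2] = True
mask[3:, 3:] = True
inst = connected_instances(mask)
```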