2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2017.8206470

SegICP: Integrated deep semantic segmentation and pose estimation

Abstract: Recent robotic manipulation competitions have highlighted that sophisticated robots still struggle to achieve fast and reliable perception of task-relevant objects in complex, realistic scenarios. To improve these systems' perceptive speed and robustness, we present SegICP, a novel integrated solution to object recognition and pose estimation. SegICP couples convolutional neural networks and multi-hypothesis point cloud registration to achieve both robust pixel-wise semantic segmentation as well as accurate an…
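The abstract describes coupling semantic segmentation with point cloud registration. As background for the registration half, the sketch below is a minimal point-to-point ICP in plain NumPy (nearest-neighbour correspondences plus a Kabsch/SVD rigid fit). It is an illustrative toy, not SegICP's multi-hypothesis pipeline, and the function names are my own.

```python
import numpy as np

def best_fit_transform(A, B):
    # Kabsch: least-squares rigid transform (R, t) with B ~ A @ R.T + t.
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source, target, iters=50, tol=1e-8):
    # Alternate nearest-neighbour matching and rigid fitting until the
    # mean correspondence distance stops improving.
    src = source.copy()
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d = np.linalg.norm(src[:, None] - target[None], axis=2)
        idx = d.argmin(axis=1)                   # closest target per point
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        # Compose the incremental transform into the running total.
        R_total, t_total = R @ R_total, R @ t_total + t
        err = d[np.arange(len(src)), idx].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

A real pipeline (as in the paper) would first crop the depth cloud to the segmented object mask, then score several registration hypotheses; the brute-force distance matrix here is only workable for small clouds.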


Cited by 118 publications (83 citation statements)
References 32 publications
“…With the availability of powerful commodity GPUs and fast detection algorithms [27,38], these methods are suitable for the real-time object detection required in robotics. More recently, deep learning based approaches in computer vision are being adopted for the task of pose estimation of specific objects [33,53,54]. Improving instance detection and pose estimation in warehouses will be significantly useful for the perception pipeline in systems trying to solve the Amazon Picking Challenge [7].…”
Section: Related Work (mentioning)
confidence: 99%
“…The dataset consists of indoor scenes with 10 categories of densely annotated objects relevant to an automotive oil change, such as oil bottles, funnels, and engines. Images were captured with one of three sensor types (Microsoft Kinect1, Microsoft Kinect2, or Asus Xtion Pro Live) and were automatically annotated with object poses and pixelwise instance masks using either the motion capture setup described in [13] or the LabelFusion [15] pipeline.…”
Section: A. Datasets (mentioning)
confidence: 99%
“…Object poses are expensive to annotate and were often hand annotated in the past [4], [12]. More recently, automatic annotation methods have been proposed using motion capture [13] or 3D scene reconstruction [14], [15], but these methods still require significant human labor and are not able to generate significant variability in pose since objects must remain stationary during data capture. To address this issue, we propose a novel pose estimation approach that leverages synthetic pose data.…”
Section: Introduction (mentioning)
confidence: 99%
“…The impressive development of deep neural networks, especially convolutional neural networks (CNNs; Garcia‐Garcia, Orts‐Escolano, Oprea, Villena‐Martinez, & Garcia‐Rodriguez, ), has led to a significant improvement in semantic segmentation approaches in recent years. Many robotic applications benefited from these improvements, for example, autonomous driving (Luc, Neverova, Couprie, Verbeek, & LeCun, ) and object detection and manipulation (Wong et al, ). For training, however, these methods require extensive amounts of pixel‐level labeled data.…”
Section: Introduction (mentioning)
confidence: 99%