2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.85

SnapNet-R: Consistent 3D Multi-view Semantic Labeling for Robotics

Abstract: In this paper we present a new approach to semantic recognition in the context of robotics. As a robot moves through its environment, it acquires 3D information either from its sensors or from its own motion via 3D reconstruction. Our approach (i) synthesizes 3D-coherent observations of the scene and (ii) fuses them in a multi-view framework for 3D labeling, which (iii) is efficient both locally (for 2D semantic segmentation) and globally (for 3D structure labeling). This makes it possible to add semantics to the observed scene…
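The pipeline the abstract outlines (render 3D-coherent views of the scene, segment them in 2D, fuse the per-view labels back onto the 3D structure) can be sketched in a few lines. The NumPy sketch below is a hedged illustration only: the pinhole projection, the image size, the dummy `segment_2d` stand-in for the 2D CNN, and the majority-vote fusion are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def project(points, K, R, t, h, w):
    """Project Nx3 world points into an h x w view; return integer pixel
    coordinates and a visibility mask (in front of camera, inside frame)."""
    cam = points @ R.T + t                        # world -> camera frame
    z = cam[:, 2]
    uv = cam @ K.T                                # pinhole projection
    px = np.round(uv[:, :2] / np.maximum(uv[:, 2:3], 1e-9)).astype(int)
    ok = (z > 0) & (px[:, 0] >= 0) & (px[:, 0] < w) \
                 & (px[:, 1] >= 0) & (px[:, 1] < h)
    return px, ok

def segment_2d(image):
    """Stand-in for a trained 2D semantic segmentation CNN."""
    return np.zeros(image.shape[:2], dtype=int)   # dummy: all pixels class 0

def label_points(points, views, num_classes, h=240, w=320):
    """Accumulate one label vote per 3D point and per view, then take the
    majority: multi-view fusion on the 3D structure, in miniature."""
    votes = np.zeros((len(points), num_classes), dtype=int)
    for K, R, t in views:
        px, ok = project(points, K, R, t, h, w)
        labels = segment_2d(np.zeros((h, w, 3)))  # rendering itself omitted
        # back-project: each visible point takes the label of its pixel
        votes[np.where(ok)[0], labels[px[ok, 1], px[ok, 0]]] += 1
    return votes.argmax(axis=1)                   # consistent 3D labeling
```

A real pipeline would render RGB and depth snapshots and feed them to a trained network; the voting step is what makes the independent per-view 2D predictions consistent on the 3D structure.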

Cited by 76 publications (47 citation statements). References 35 publications. Citing publications span 2018 to 2023.
“…Deep Learning on Point Clouds. To exploit the strong representation capability of classic CNNs, a 3D point cloud is first projected into multi-view rendered images in [33,31,27,9], to which well-designed 2D CNNs can then be applied. However, part of the contextual information in the point cloud is lost during the projection process.…”
Section: Related Work
confidence: 99%
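The information loss the excerpt mentions is visible in the projection step itself: in a z-buffer rendering, occluded points never reach a pixel, so whatever context they carry is absent from the image. A minimal sketch of such a point-splatting step, with the camera model and image size as illustrative assumptions:

```python
import numpy as np

def render_view(points, colors, K, R, t, h=240, w=320):
    """Splat Nx3 colored points into an h x w image; only the nearest
    point survives at each pixel (simple z-buffer), so occluded points
    contribute nothing to the rendered view."""
    cam = points @ R.T + t
    z = cam[:, 2]
    uv = cam @ K.T
    px = np.round(uv[:, :2] / np.maximum(uv[:, 2:3], 1e-9)).astype(int)
    ok = (z > 0) & (px[:, 0] >= 0) & (px[:, 0] < w) \
                 & (px[:, 1] >= 0) & (px[:, 1] < h)
    image = np.zeros((h, w, 3))
    depth = np.full((h, w), np.inf)
    for i in np.where(ok)[0]:
        u, v = px[i]
        if z[i] < depth[v, u]:                  # nearest point wins
            depth[v, u] = z[i]
            image[v, u] = colors[i]
    return image
```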
“…Inspired by deep-learning-based feature learning approaches used for image recognition, researchers have proposed methods that process point clouds by transforming them into voxel-based representations or 2D images as input to deep neural networks [21][22][23][24], obtaining better recognition results than traditional methods based on handcrafted features. However, these methods generally lose much spatial information during the transformation and take a long time to train.…”
Section: Related Work
confidence: 99%
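The spatial loss the excerpt attributes to voxel-based transformation comes from binning: every point that falls into the same cell collapses into a single occupancy value. A minimal sketch, with the grid resolution and bounds handling as assumptions:

```python
import numpy as np

def voxelize(points, resolution=32):
    """Convert an Nx3 point cloud into a binary occupancy grid."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    scale = (resolution - 1) / np.maximum(hi - lo, 1e-9)
    idx = ((points - lo) * scale).astype(int)      # point -> voxel index
    grid = np.zeros((resolution,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True   # many points, one voxel
    return grid
```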
“…Based on this, Boulch et al. (2018) introduced SnapNet, which renders images from randomly chosen views and uses a U-Net to classify them, before back-projecting the labels onto the 3D point cloud. Guerry et al. (2017) optimized SnapNet for application to RGB-D data in robotics.…”
Section: Deep Learning-based Semantic Labeling
confidence: 99%
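The "randomly chosen views" step the excerpt describes can be approximated by sampling camera poses around the point cloud. The sketch below places cameras on a sphere around the centroid and points them at it; the radius heuristic and the look-at construction are assumptions for illustration, not the method of Boulch et al.

```python
import numpy as np

def random_views(points, n_views=8, seed=None):
    """Sample camera poses on a sphere around the cloud's centroid, each
    looking at the centroid; returns world->camera (R, t) pairs."""
    rng = np.random.default_rng(seed)
    center = points.mean(axis=0)
    radius = 2.0 * np.linalg.norm(points - center, axis=1).max()
    poses = []
    for _ in range(n_views):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                 # uniform direction on sphere
        eye = center + radius * d              # camera position
        fwd = (center - eye) / radius          # unit vector toward centroid
        # pick a world 'up' that is not (nearly) parallel to the view axis
        up = np.array([0., 0., 1.]) if abs(fwd[2]) < 0.99 else np.array([0., 1., 0.])
        right = np.cross(fwd, up); right /= np.linalg.norm(right)
        down = np.cross(fwd, right)            # camera y axis points down
        R = np.stack([right, down, fwd])       # rows: camera x, y, z axes
        poses.append((R, -R @ eye))
    return poses
```

Paired with an intrinsics matrix K, these (R, t) poses could drive the multi-view labeling sketch shown after the abstract.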