2020
DOI: 10.48550/arxiv.2002.10893
Preprint
3D-MiniNet: Learning a 2D Representation from Point Clouds for Fast and Efficient 3D LIDAR Semantic Segmentation

Abstract: LIDAR semantic segmentation, which assigns a semantic label to each 3D point measured by the LIDAR, is becoming an essential task for many robotic applications such as autonomous driving. Fast and efficient semantic segmentation methods are needed to match the strong computational and temporal restrictions of many of these real-world applications. This work presents 3D-MiniNet, a novel approach for LIDAR semantic segmentation that combines 3D and 2D learning layers. It first learns a 2D representation from the …

Cited by 5 publications (9 citation statements)
References 34 publications
“…Specifically, spherical projection provides an efficient way to sample points, and the projected images preserve the geometric information of the point cloud well and can be processed effectively by standard CNNs. A series of methods have adopted this approach [45,14,15,16,46,47,48].…”
Section: Deep Learning Methods
confidence: 99%
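
As a concrete illustration of the spherical projection described in the statement above, the following minimal sketch maps an N x 3 LiDAR point cloud onto a 2D range image. The image size (64 x 1024) and vertical field of view are assumptions for a Velodyne-like sensor, not values taken from the cited papers.

import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project N x 3 LiDAR points (x, y, z) onto an H x W range image (assumed sensor params)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # elevation

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W                # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * H         # row from elevation

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    range_image = np.full((H, W), -1.0, dtype=np.float32)
    # Write farthest points first so closer points overwrite them.
    order = np.argsort(depth)[::-1]
    range_image[v[order], u[order]] = depth[order]
    return range_image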
“…Following prior work by [14], existing methods [14,15,46,16,48,47] stack coordinates (x,y,z), depths and intensities as five channels in the projected images, i.e.…”
Section: Modality Gap and Proposed Solution
confidence: 99%
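
The five-channel input mentioned in this statement can be sketched as follows: per-pixel (x, y, z), range, and intensity are scattered into an H x W x 5 tensor using the same spherical projection as above. The channel ordering, the zero fill value for empty pixels, and the sensor parameters are assumptions rather than details from the cited works.

import numpy as np

def build_five_channel_image(points, intensity, H=64, W=1024,
                             fov_up_deg=3.0, fov_down_deg=-25.0):
    """points: N x 3 array of (x, y, z); intensity: N array of reflectances."""
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))

    fov_up, fov_down = np.deg2rad(fov_up_deg), np.deg2rad(fov_down_deg)
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * W), 0, W - 1).astype(int)
    v = np.clip(np.floor((1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H),
                0, H - 1).astype(int)

    image = np.zeros((H, W, 5), dtype=np.float32)
    # Stack (x, y, z, depth, intensity) per point, then scatter into the image;
    # sorting by descending depth lets closer points win pixel conflicts.
    feats = np.concatenate([points, depth[:, None], intensity[:, None]], axis=1)
    order = np.argsort(depth)[::-1]
    image[v[order], u[order]] = feats[order]
    return image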
“…Considering the imbalanced spatial distribution of LiDAR point clouds, PolarNet [21] voxelizes the raw point cloud into a BEV polar grid, and the features of the points in each grid cell are produced by a learnable simplified PointNet. 3D-MiniNet [22] groups points into view frustums. A point-wise method is applied in each view frustum, and the global context information is mainly extracted with the MiniNet backbone, a 2D CNN operating on the range image.…”
Section: Related Work
confidence: 99%
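
A rough sketch of the polar BEV partition referenced above, in the spirit of PolarNet: each point is binned by radius and azimuth, and per-cell features are max-pooled as a simple stand-in for the learnable simplified PointNet. The grid resolution and radial range are assumptions.

import numpy as np

def polar_bev_grid(points, features, num_r=64, num_theta=64, max_r=50.0):
    """points: N x 3 (x, y, z); features: N x C per-point features (assumed non-negative)."""
    rho = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0])   # azimuth in [-pi, pi]

    r_idx = np.clip((rho / max_r * num_r).astype(int), 0, num_r - 1)
    t_idx = np.clip(((theta + np.pi) / (2 * np.pi) * num_theta).astype(int),
                    0, num_theta - 1)

    grid = np.zeros((num_r, num_theta, features.shape[1]), dtype=np.float32)
    # Max-pool point features that fall into the same polar cell; empty cells stay zero.
    np.maximum.at(grid, (r_idx, t_idx), features)
    return grid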
“…The contributions of this work mainly lie in three aspects: (1) We reposition the focus of outdoor LiDAR segmentation from 2D projection to 3D structure, and further investigate the inherent properties (difficulties) of outdoor point clouds. (2) We introduce a new framework to explore the 3D geometric pattern and tackle the difficulties caused by sparsity and varying density through cylindrical partition and asymmetrical 3D convolution networks.…”
Section: Introduction
confidence: 99%