2020
DOI: 10.1109/lra.2020.3007440
3D-MiniNet: Learning a 2D Representation From Point Clouds for Fast and Efficient 3D LIDAR Semantic Segmentation

Cited by 127 publications (67 citation statements)
References 29 publications
“…AMVNet [45] considers the class-prediction uncertainty between different views' networks, and these uncertain points are passed to an extra network to obtain more robust predictions. 3D-MiniNet [46] learns a 2D representation from the raw points through a projection that extracts local and global information from the 3D data, and then feeds it to a 2D fully convolutional neural network to generate semantic predictions.…”
Section: Deep Learning Based On Regular Representations
Mentioning confidence: 99%
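The projection described in the excerpt above can be sketched as a spherical (range-image) projection of the raw point cloud. This is a minimal illustration, not the paper's actual learned projection module; the function name, image resolution, and field-of-view values (typical for a 64-beam sensor) are assumptions for the sketch.

```python
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an H x W range image.

    A hedged sketch of the spherical projection commonly used by
    projection-based segmentation methods; H, W, and the vertical
    field of view are illustrative 64-beam defaults.
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                            # azimuth angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))    # elevation angle

    # Normalise the angles to [0, 1] image coordinates.
    u = 0.5 * (1.0 - yaw / np.pi)                     # column coordinate
    v = 1.0 - (pitch - fov_down_rad) / fov            # row coordinate

    col = np.clip((u * W).astype(np.int32), 0, W - 1)
    row = np.clip((v * H).astype(np.int32), 0, H - 1)

    image = np.zeros((H, W), dtype=np.float32)
    image[row, col] = depth    # later points overwrite earlier ones at a pixel
    return image, row, col
```

The per-point `row`/`col` indices are kept so that 2D predictions can later be mapped back to the original points.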
“…Due to the irregularity of the point cloud, many 3D detection algorithms convert the point cloud into 2D images and extract features with 2D convolutions [18][19][20]. Zeng et al. [21] proposed a Real-Time 3D (RT3D) algorithm that converts 3D point clouds into 2D grids.…”
Section: LiDAR
Mentioning confidence: 99%
“…3D-MiniNet, proposed by Alonso et al., obtains local and global context information from the 3D data through a multi-view projection, and then feeds these data to a 2D fully convolutional neural network (FCNN) to predict semantic labels. Finally, the predicted 2D semantic labels are re-projected into 3D space, obtaining high-quality results in a faster and more efficient way [30].…”
Section: Two-Dimensional Multiview Methods
Mentioning confidence: 99%
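The re-projection step mentioned above amounts to a per-point lookup: each 3D point takes the class predicted at the pixel it was projected to. A minimal sketch, assuming the per-point pixel indices from the projection step are available (the names `label_image`, `row`, and `col` are hypothetical, not from the paper):

```python
import numpy as np

def reproject_labels(label_image, row, col):
    """Map a 2D semantic prediction back to the 3D point cloud.

    `label_image` is an (H, W) array of predicted class ids, and
    `row`/`col` are the per-point pixel indices produced when the
    points were projected to the image. Returns one label per point.
    """
    return label_image[row, col]
```

Fancy indexing makes this a single vectorised gather, which is why the re-projection adds negligible cost to the 2D inference pass.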