2021
DOI: 10.48550/arxiv.2103.12978
Preprint

RPVNet: A Deep and Efficient Range-Point-Voxel Fusion Network for LiDAR Point Cloud Segmentation

Cited by 7 publications (11 citation statements)
References 0 publications
“…To save computational cost, regular representations, including 3D voxels and 2D grids, polar and cylinder grids, and range images (Zhou and Tuzel 2018; Zhang et al. 2020; Zhu et al. 2020b; Milioto et al. 2019; Xu et al. 2020), are used to organize sparse points. Recently, hybrid methods (Tang et al. 2020; Xu et al. 2021; Ye et al. 2021) that combine multiple representations have been proposed to integrate the advantages of both fine-grained point-wise features and the effective feature aggregation of regular representations. Sparse convolution (Graham 2015; Graham, Engelcke, and Van Der Maaten 2018) is also widely used to restrict convolution output to the active regions only, accelerating the volumetric convolution and enabling larger model sizes.…”
Section: Related Work
confidence: 99%
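The sparse-convolution idea referenced above — computing convolution outputs only at active (occupied) voxel sites — can be illustrated with a minimal, library-free sketch. This is not the API of the cited implementations; the hash-based neighbour lookup, kernel extent, and feature sizes are illustrative assumptions only.

# A minimal sketch of submanifold-style sparse convolution: outputs are
# computed only at active voxel sites, and only active neighbours contribute,
# so empty space costs nothing. Illustrative only, not an optimized kernel.
import torch

def sparse_conv3d(coords, feats, weight):
    """coords: (N, 3) int voxel indices of active sites
       feats:  (N, C_in) features at those sites
       weight: (K, K, K, C_in, C_out) dense kernel, K odd
       returns (N, C_out), one output per active site."""
    K = weight.shape[0]
    r = K // 2
    # Hash active coordinates so neighbour lookups are O(1).
    lut = {tuple(c.tolist()): i for i, c in enumerate(coords)}
    out = feats.new_zeros(feats.shape[0], weight.shape[-1])
    offsets = [(dx, dy, dz) for dx in range(-r, r + 1)
                            for dy in range(-r, r + 1)
                            for dz in range(-r, r + 1)]
    for i, c in enumerate(coords.tolist()):
        for dx, dy, dz in offsets:
            j = lut.get((c[0] + dx, c[1] + dy, c[2] + dz))
            if j is not None:  # skip empty space entirely
                out[i] += feats[j] @ weight[dx + r, dy + r, dz + r]
    return out

# Toy usage: 4 occupied voxels out of a nominally huge grid.
coords = torch.tensor([[0, 0, 0], [0, 0, 1], [5, 5, 5], [5, 5, 6]])
feats = torch.randn(4, 8)
weight = torch.randn(3, 3, 3, 8, 16)
print(sparse_conv3d(coords, feats, weight).shape)  # torch.Size([4, 16])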
“…Multi-view fusion-based methods combine voxel-based, projection-based and/or point-wise operations for LiDAR point cloud segmentation. To extract more semantic information, some recent methods [35], [36], [37], [38], [39], [40], [41], [7], [8] blend two or more different views together. For instance, [38], [39] combine point-wise information from the BEV and the range image at an early stage, and then feed it to the subsequent network.…”
Section: B. LiDAR Point Cloud Semantic Segmentation
confidence: 99%
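The early-stage multi-view fusion described in that statement can be sketched as follows: each LiDAR point is projected into a BEV grid and a spherical range image, the per-cell and per-pixel features are gathered back to the point, and the concatenation is what a subsequent point-wise network would consume. The function name, grid resolutions, and vertical field of view below are assumptions for illustration, not any cited method's exact recipe.

# Hypothetical early fusion of BEV and range-image features per point.
import torch

def point_view_fusion(xyz, bev_feat, rng_feat, bev_res=0.2, fov_up=10.0, fov_down=-30.0):
    """xyz: (N, 3) points; bev_feat: (C1, H, W) BEV map; rng_feat: (C2, Hr, Wr) range-image map."""
    C1, H, W = bev_feat.shape
    C2, Hr, Wr = rng_feat.shape

    # BEV projection: quantise x, y into grid cells (ego vehicle at the centre).
    u = ((xyz[:, 0] / bev_res) + W // 2).long().clamp(0, W - 1)
    v = ((xyz[:, 1] / bev_res) + H // 2).long().clamp(0, H - 1)
    bev_pt = bev_feat[:, v, u].t()                        # (N, C1)

    # Spherical (range-image) projection: azimuth -> column, elevation -> row.
    depth = xyz.norm(dim=1).clamp(min=1e-6)
    yaw = torch.atan2(xyz[:, 1], xyz[:, 0])
    pitch = torch.asin(xyz[:, 2] / depth)
    col = ((0.5 * (1.0 - yaw / torch.pi)) * Wr).long().clamp(0, Wr - 1)
    fov = (fov_up - fov_down) * torch.pi / 180.0
    row = ((1.0 - (pitch - fov_down * torch.pi / 180.0) / fov) * Hr).long().clamp(0, Hr - 1)
    rng_pt = rng_feat[:, row, col].t()                    # (N, C2)

    return torch.cat([bev_pt, rng_pt], dim=1)             # (N, C1 + C2) per-point feature

pts = torch.randn(1024, 3) * 20
fused = point_view_fusion(pts, torch.randn(16, 256, 256), torch.randn(16, 64, 512))
print(fused.shape)  # torch.Size([1024, 32])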
“…AF2-S3Net [7] uses a point-voxel fusion scheme to achieve better segmentation results. RPVNet [8] proposes a deep fusion network that fuses the three range-point-voxel views through a gated fusion mechanism. However, the performance of these methods is also limited because LiDAR point clouds lack rich colors and textures.…”
Section: B. LiDAR Point Cloud Semantic Segmentation
confidence: 99%
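The gated fusion mechanism mentioned in that statement can be sketched as a small module: three per-point feature streams (range, point, and voxel views, already mapped back to the points) are combined with learned, input-dependent gates. This is an illustrative block in the spirit of the description, not RPVNet's exact architecture; the class name and channel sizes are assumptions.

# Hypothetical gated fusion over three per-point view features.
import torch
import torch.nn as nn

class GatedViewFusion(nn.Module):
    def __init__(self, channels: int, num_views: int = 3):
        super().__init__()
        # One scalar gate per view, predicted from the concatenated views.
        self.gate = nn.Linear(num_views * channels, num_views)

    def forward(self, range_f, point_f, voxel_f):
        views = torch.stack([range_f, point_f, voxel_f], dim=1)   # (N, 3, C)
        logits = self.gate(views.flatten(1))                      # (N, 3)
        weights = torch.softmax(logits, dim=1).unsqueeze(-1)      # (N, 3, 1)
        return (weights * views).sum(dim=1)                       # (N, C) fused feature

fuse = GatedViewFusion(channels=64)
n = 2048
out = fuse(torch.randn(n, 64), torch.randn(n, 64), torch.randn(n, 64))
print(out.shape)  # torch.Size([2048, 64])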
“…Cylinder3D is a novel 3D segmentation approach that uses cylindrical and asymmetrical 3D convolution networks to achieve SOTA performance. In addition, any SOTA 3D segmentation network, such as PolarNet [37], AF2S3Net [38], or RPVNet [39], can be used in the proposed framework. Furthermore, no extra point-wise semantic annotations are required; the 3D object bounding-box annotations can be freely used for generating the segmentation labels.…”
Section: A. Multi-modal Semantic Segmentation
confidence: 99%
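The last step of that statement — deriving segmentation labels from 3D bounding-box annotations — amounts to labelling every point that falls inside a box with that box's class and everything else as background. The sketch below assumes a common box encoding (centre x, y, z, size l, w, h, yaw, plus a class id); the exact annotation format of the cited framework is not specified here.

# Hypothetical conversion of 3D box annotations into point-wise labels.
import numpy as np

def boxes_to_point_labels(points, boxes, classes, background=0):
    """points: (N, 3); boxes: (M, 7) as [cx, cy, cz, l, w, h, yaw]; classes: (M,)"""
    labels = np.full(len(points), background, dtype=np.int64)
    for (cx, cy, cz, l, w, h, yaw), cls in zip(boxes, classes):
        # Rotate points into the box frame so the inside test becomes axis-aligned.
        shifted = points - np.array([cx, cy, cz])
        c, s = np.cos(-yaw), np.sin(-yaw)
        x = c * shifted[:, 0] - s * shifted[:, 1]
        y = s * shifted[:, 0] + c * shifted[:, 1]
        z = shifted[:, 2]
        inside = (np.abs(x) <= l / 2) & (np.abs(y) <= w / 2) & (np.abs(z) <= h / 2)
        labels[inside] = cls
    return labels

pts = np.random.uniform(-10, 10, size=(5000, 3))
boxes = np.array([[2.0, 1.0, 0.0, 4.0, 2.0, 1.5, 0.3]])
print(np.bincount(boxes_to_point_labels(pts, boxes, np.array([1]))))  # background vs. class-1 counts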