2021
DOI: 10.48550/arxiv.2102.04530
Preprint

(AF)2-S3Net: Attentive Feature Fusion with Adaptive Feature Selection for Sparse Semantic Segmentation Network

Abstract: Autonomous robotic systems and self-driving cars rely on accurate perception of their surroundings, as the safety of passengers and pedestrians is the top priority. Semantic segmentation is one of the essential components of road-scene perception that provides semantic information about the surrounding environment. Recently, several methods have been introduced for 3D LiDAR semantic segmentation. While they can lead to improved performance, they are either afflicted by high computational complexity, therefore …

Cited by 9 publications (9 citation statements)
References 35 publications
“…Multi-view fusion-based methods combine voxel-based, projection-based and/or point-wise operations for LiDAR point cloud segmentation. To extract more semantic information, some recent methods [35], [36], [37], [38], [39], [40], [41], [7], [8] blend two or more different views together. For instance, [38], [39] combine point-wise information from BEV and range-image views at an early stage, and then feed it to the subsequent network.…”
Section: B. LiDAR Point Cloud Semantic Segmentation (mentioning)
confidence: 99%
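The early-fusion idea in the statement above (gathering per-point features from two projected views and combining them before the backbone) can be sketched roughly as follows. This is an illustrative NumPy sketch, not the cited methods' actual architecture; the grid sizes, channel counts, and index layout are all hypothetical.

```python
import numpy as np

def gather_view_features(view_feat, point_idx):
    """Look up each point's feature in a flattened 2D projected view.

    view_feat: (H*W, C) features of a projected view (e.g. BEV or range image).
    point_idx: (N,) flat pixel/cell index that each 3D point projects to.
    Returns (N, C) per-point features gathered from the view.
    """
    return view_feat[point_idx]

rng = np.random.default_rng(0)
n_points = 5
bev_feat = rng.normal(size=(16, 4))       # 4x4 BEV grid, 4 channels
range_feat = rng.normal(size=(16, 3))     # 4x4 range image, 3 channels
bev_idx = rng.integers(0, 16, n_points)   # where each point lands in the BEV grid
rng_idx = rng.integers(0, 16, n_points)   # where each point lands in the range image

# Early fusion: concatenate the per-point features from both views,
# then hand the fused tensor to the downstream segmentation network.
fused = np.concatenate(
    [gather_view_features(bev_feat, bev_idx),
     gather_view_features(range_feat, rng_idx)],
    axis=1,
)
print(fused.shape)  # (5, 7)
```

The key design point is that fusion happens in point space: each view contributes whatever context its projection captures, and the point index is the join key.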
“…(AF)2-S3Net [7] uses a point-voxel fusion scheme to achieve better segmentation results. RPVNet [8] proposes a deep fusion network that fuses the three range, point, and voxel views by a gated fusion mechanism.…”
Section: B. LiDAR Point Cloud Semantic Segmentation (mentioning)
confidence: 99%
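A gated fusion mechanism of the kind the statement above attributes to RPVNet can be sketched in a few lines: each view's features pass through a gate in (0, 1) that decides, per point and per channel, how much that view contributes. The shapes and the random weight matrices below are stand-ins for learned layers, and this is only a hedged illustration of the general pattern, not RPVNet's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(view_feats, gate_weights):
    """Fuse per-point features from several views with per-view gates.

    view_feats:   list of (N, C) arrays, one per view (e.g. range/point/voxel).
    gate_weights: list of (C, C) matrices standing in for learned gate layers.
    Each view's features are scaled by a sigmoid gate computed from the
    features themselves, and the gated features are summed.
    """
    gated = [sigmoid(f @ w) * f for f, w in zip(view_feats, gate_weights)]
    return sum(gated)

rng = np.random.default_rng(0)
feats = [rng.normal(size=(6, 8)) for _ in range(3)]    # three views, 6 points
weights = [rng.normal(size=(8, 8)) for _ in range(3)]  # hypothetical gate weights
fused = gated_fusion(feats, weights)
print(fused.shape)  # (6, 8)
```

Compared with plain concatenation, the gate lets the network suppress a view where its projection is unreliable (e.g. heavy occlusion in the range image) instead of passing its features through unchanged.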
“…We only compare with methods that report both accuracy and runtime. The methods in comparison are: CenterPoint [34] (CP), HotSpotNet [6] (H), CVCNet [5] (CVC), PointPainting [28] (PNT), PointPillars [15] (PP), SAPNET [33] (S), AF2S3Net [8] (A), Cylinder3D [36] (C3D), PolarNet [35] (PLN), SPVNAS [27] (SPV), SalsaNext [9]+CBGS [40] (S+C), PolarNet+CBGS (P+C) and PaP [39] (PaP). We color our methods in green and other methods in red.…”
Section: How Streaming Models Enlarge Context (mentioning)
confidence: 99%
“…It lacks the ability to process more general unordered point clouds, but it shows practical advantages such as better performance in terms of both speed and accuracy [6], [7]. To chase better performance, recent researchers further design models by combining multi-view projections or voxelization with point-wise features [8], [9], [10].…”
Section: Introduction (mentioning)
confidence: 99%
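The point-wise processing of unordered point clouds mentioned in the statement above rests on a simple property: a shared per-point transform followed by a symmetric pooling (e.g. max) gives the same global feature regardless of point order. A minimal PointNet-style sketch, with random weights standing in for trained layers:

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(8, 3))          # 8 unordered points, xyz coordinates
w1 = rng.normal(size=(3, 16))             # stand-in for a learned shared MLP

per_point = np.maximum(points @ w1, 0.0)  # shared MLP with ReLU, (8, 16)
global_feat = per_point.max(axis=0)       # symmetric max-pool, (16,)

# Permuting the input points leaves the pooled global feature unchanged,
# which is why point-wise models handle unordered inputs directly.
perm = rng.permutation(8)
per_point_p = np.maximum(points[perm] @ w1, 0.0)
assert np.allclose(per_point_p.max(axis=0), global_feat)
```

Multi-view and voxel branches trade away this exact permutation invariance for spatial structure, which is why the hybrid designs cited above reattach point-wise features on top of the projected or voxelized ones.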