2021
DOI: 10.3390/s21051625
Point Cloud Semantic Segmentation Network Based on Multi-Scale Feature Fusion

Abstract: The semantic segmentation of small objects in point clouds is currently one of the most demanding tasks in photogrammetry and remote sensing applications. Multi-resolution feature extraction and fusion can significantly enhance the ability of object classification and segmentation, so it is widely used in the image field. Motivated by this, we propose a point cloud semantic segmentation network based on multi-scale feature fusion (MSSCN) to aggregate the features of point clouds with different densities and …


Cited by 13 publications (9 citation statements)
References 49 publications
“…The most common application of image fusion in LiDAR remote sensing is the fusion of 3D point clouds and RGB images to train a deep learning model for classification and segmentation tasks [ 20 , 21 , 22 ]. The features extracted from both types of data are combined to enhance per-class performance in the application.…”
Section: Point Cloud Computing
confidence: 99%
“…For instance, PointNet++ [14] exploited Furthest Point Sampling (FPS) to hierarchically downsample point clouds and iteratively extract features with PointNet in each sampling layer. To describe large-scale scenes at multiple resolutions, MSSCN [30] concatenated point features with different densities, PointSIFT [31] paid attention to encoding both multi-orientations and multi-scales for local details, and PointCNN [32] employed a fully convolutional point network with a series of abstraction layers, feature learners at different scales, and a merging layer. Because random sampling in large-scale scenes is time-consuming when it operates on the original points, supervoxels [33], inspired by superpixels in 2D image processing, greatly cut down the number of points and provide a more natural and compact representation for local operations.…”
Section: Deep Learning Network For Semantic Segmentation
confidence: 99%
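The Furthest Point Sampling procedure mentioned above can be sketched in a few lines: starting from an arbitrary seed, each iteration picks the point that maximizes the distance to the already-chosen set. This is a minimal NumPy illustration of the generic FPS algorithm, not code from PointNet++ or MSSCN; the function name and greedy seed choice are our own assumptions.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy FPS sketch: pick the point farthest from the chosen set.

    points: (N, 3) array of coordinates; k: number of samples.
    Returns an array of k indices into `points`.
    """
    n = points.shape[0]
    chosen = np.zeros(k, dtype=np.int64)  # chosen[0] = 0: arbitrary seed
    dist = np.full(n, np.inf)  # distance of each point to nearest chosen point
    for i in range(1, k):
        # Update nearest-chosen distances with the last picked point,
        # then take the point that is currently farthest from the set.
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(np.argmax(dist))
    return chosen
```

Hierarchical networks such as PointNet++ apply this repeatedly, so each layer keeps a progressively sparser but spatially well-spread subset of the cloud.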
“…Camera data is fused with LiDAR data to better detect objects [ 26 ]. In some works, object detection is approached by performing semantic segmentation on LiDAR data [ 29 , 30 ] or camera-LiDAR fused data [ 31 ].…”
Section: Related Work
confidence: 99%