2020
DOI: 10.1049/el.2019.2856
Point‐selection and multi‐level‐point‐feature fusion‐based 3D point cloud classification

Abstract: In recent years, research on object classification based on three-dimensional (3D) point clouds has paid increasing attention to extracting features directly from point sets. PointNet++ is the latest network structure for 3D classification and has achieved acceptable results, but two problems remain: (i) the farthest point sampling (FPS) algorithm in PointNet++ ignores the fact that each point's feature contributes differently to classification and segmentation…
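The abstract's first criticism targets farthest point sampling (FPS), the purely geometric sampler PointNet++ uses to pick centroids for local feature grouping. As a point of reference, below is a minimal NumPy sketch of vanilla FPS (not the authors' proposed point-selection scheme, which this excerpt does not detail); note that the selection criterion uses only Euclidean distance, never a point's learned feature.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Greedy FPS: repeatedly pick the point farthest from the set already
    selected. Purely geometric -- it never looks at per-point features,
    which is the limitation the abstract points out."""
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)   # distance of every point to the selected set
    farthest = 0                # start from an arbitrary point (index 0)
    for i in range(n_samples):
        selected[i] = farthest
        d = np.sum((points - points[farthest]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        farthest = int(np.argmax(dist))
    return points[selected]

# usage: sample 512 seed points from a 2048-point cloud
cloud = np.random.rand(2048, 3)
seeds = farthest_point_sampling(cloud, 512)
```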

Cited by 8 publications (3 citation statements) | References 11 publications
“…Point cloud data differs from other kinds of data: it is unordered, and the same set of points can be presented in many different arrangements, so traditional deep learning methods cannot process it directly. Point cloud data is also sparse, and it is difficult for deep learning methods to handle the sparse clouds produced by the collection device. As shown in the figure, the point cloud consists of four points, f_a, f_b, f_c, and f_d; point cloud data collected by different devices may arrive in different orders, and traditional methods will incorrectly identify point clouds with different orders as different classes [16][17][18][19].…”
Section: Introduction
confidence: 99%
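The excerpt above argues that an order-sensitive encoder treats two permutations of the same points as different inputs, which is why PointNet-family networks aggregate with a symmetric function. A small PyTorch illustration (toy feature values, hypothetical and not taken from the cited papers) makes the contrast explicit:

```python
import torch

# toy per-point features for the four points f_a, f_b, f_c, f_d named in the excerpt
f = torch.tensor([[1.0, 0.0],
                  [0.0, 2.0],
                  [3.0, 1.0],
                  [0.5, 0.5]])          # shape: (4 points, 2 feature dims)

f_perm = f[torch.tensor([2, 0, 3, 1])]  # the same cloud delivered in another order

# order-sensitive encoding: flattening/concatenating points changes with the order
print(torch.equal(f.flatten(), f_perm.flatten()))                    # False

# symmetric (PointNet-style) encoding: per-dimension max is order-invariant
print(torch.equal(f.max(dim=0).values, f_perm.max(dim=0).values))    # True
```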
“…In recent years, the accuracy and robustness of point cloud registration methods based on deep learning have improved remarkably with advances in hardware and deep learning technology [2,3]. Nevertheless, deep learning methods consume massive computational resources when processing light detection and ranging (LiDAR) point clouds directly, prompting these methods to downsample the original point cloud before it is fed into the network [4]. The matching of features is crucial for deep learning-based point cloud registration [5].…”
mentioning
confidence: 99%
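The statement notes that learning-based registration pipelines usually downsample raw LiDAR clouds before feeding them to the network to save compute. The exact procedure used in the cited work is not given here; voxel-grid downsampling is one common option, sketched below purely as a generic illustration.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # group points that fall into the same voxel
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]

scan = np.random.rand(100_000, 3) * 50.0        # synthetic LiDAR-scale cloud
reduced = voxel_downsample(scan, voxel_size=0.5)
```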
“…As a result, the motion-based methods can ensure the robustness of the tracker against interference and changes in appearance. In contrast to existing methods [13][14][15], in this letter, we enhance feature learning by introducing the point position embedding module followed by a self-attention coding module. These modules allow us to capture the spatial relationship between points within point clouds more effectively.…”
mentioning
confidence: 99%
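This last excerpt describes enhancing feature learning with a point position embedding module followed by a self-attention coding module, without giving the architecture. The sketch below is one generic reading of that idea (all module names and dimensions are assumptions, not the cited letter's design): xyz coordinates are embedded with a small MLP, added to per-point features, and standard multi-head self-attention is applied over the points.

```python
import torch
import torch.nn as nn

class PointSelfAttention(nn.Module):
    """Position embedding + self-attention over a point set (generic sketch)."""
    def __init__(self, feat_dim=64, n_heads=4):
        super().__init__()
        # embed raw xyz coordinates into the feature space
        self.pos_embed = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, xyz, feats):
        # xyz: (B, N, 3) coordinates; feats: (B, N, feat_dim) per-point features
        x = feats + self.pos_embed(xyz)       # inject spatial position
        attn_out, _ = self.attn(x, x, x)      # every point attends to every other
        return self.norm(x + attn_out)        # residual connection + normalisation

# usage on a batch of 2 clouds with 1024 points each
xyz = torch.rand(2, 1024, 3)
feats = torch.rand(2, 1024, 64)
out = PointSelfAttention()(xyz, feats)        # (2, 1024, 64)
```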