2021
DOI: 10.3390/s21134279
A Saliency-Based Sparse Representation Method for Point Cloud Simplification

Abstract: High-resolution 3D scanning devices produce high-density point clouds, which require a large capacity of storage and time-consuming processing algorithms. In order to reduce both needs, it is common to apply surface simplification algorithms as a preprocessing stage. The goal of point cloud simplification algorithms is to reduce the volume of data while preserving the most relevant features of the original point cloud. In this paper, we present a new point cloud feature-preserving simplification algorithm. We …

Cited by 21 publications (9 citation statements)
References 26 publications
“…For structured point clouds, Markovic et al [36] introduced a feature-sensitive subsampling technique based on insensitive support vector regression, which can efficiently locate and retain points in high-curvature areas while minimizing points in flat areas. Leal et al [37] constructed eigenvectors by calculating the normal vector and curvature of each point to estimate the saliency of the points, and used the saliency values to guide point cloud simplification. Beyond standard features such as normals and curvature, point cloud subsampling can also exploit characteristics such as edge shape and density.…”
Section: Point Cloud Subsampling
confidence: 99%
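The curvature-driven saliency idea in the statement above can be sketched in a few lines. This is a hypothetical numpy-only illustration, not the exact method of [37]: the neighborhood size `k`, the brute-force neighbor search, and the "surface variation" saliency measure (smallest covariance eigenvalue over the eigenvalue sum) are all assumptions made for the sketch.

```python
# Hypothetical sketch: per-point curvature as a saliency proxy, estimated
# from the PCA of each point's k nearest neighbors (numpy only).
import numpy as np

def curvature_saliency(points, k=8):
    """Saliency per point: lambda_0 / (lambda_0 + lambda_1 + lambda_2),
    the 'surface variation' of the local covariance (0 on flat regions)."""
    n = len(points)
    saliency = np.empty(n)
    # Brute-force pairwise squared distances (fine for small clouds).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = points[np.argsort(d2[i])[:k]]        # k nearest, incl. the point itself
        cov = np.cov(nbrs.T)                        # 3x3 local covariance
        eigvals = np.sort(np.linalg.eigvalsh(cov))  # ascending eigenvalues
        saliency[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return saliency

def simplify(points, keep_ratio=0.5, k=8):
    """Keep the most salient fraction of points."""
    s = curvature_saliency(points, k)
    keep = np.argsort(s)[::-1][: int(len(points) * keep_ratio)]
    return points[keep]
```

On a flat patch the smallest eigenvalue is near zero, so planar points score low; points near an off-plane feature score high and survive the simplification, which matches the feature-preserving goal described above.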
“…Even though high-precision DTMs generally do not need the full density of the current point cloud [ 24 , 40 , 76 , 77 , 78 , 79 ], redundant points do not negatively influence the resulting DTM. However, redundant points can significantly increase the processing time or storage requirements [ 80 , 81 , 82 ], causing practical issues for many applications.…”
Section: Point Density Variations
confidence: 99%
“…Decimation techniques range from simple ones, such as random sampling, to complex decimations based on shape of the objects described by the point cloud [ 82 , 89 , 90 , 91 , 92 , 93 , 94 , 95 , 96 ]. Random sampling can be based on the ordinal number of a point within the point cloud (count-based decimation).…”
Section: Reducing Point Density Variations In Point Clouds
confidence: 99%
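The two simple decimation flavors mentioned above can be made concrete. This is a minimal illustrative sketch (function names are my own): count-based decimation keeps every n-th point by its ordinal position, while random sampling draws a target fraction of indices uniformly.

```python
# Minimal sketch of the two simple decimation schemes described above.
import numpy as np

def count_decimate(points, step):
    """Count-based decimation: keep every `step`-th point by ordinal number."""
    return points[::step]

def random_decimate(points, keep_ratio, seed=0):
    """Uniform random sampling down to a target fraction of the cloud."""
    rng = np.random.default_rng(seed)
    n_keep = int(len(points) * keep_ratio)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[np.sort(idx)]  # preserve original ordering
```

Both are shape-agnostic, which is exactly why they are the simple end of the spectrum: neither inspects curvature, edges, or density before dropping points.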
“…Methods of this kind simplify the point cloud data directly as a whole, and their efficiency is relatively high. The second category divides the original point cloud data into grids [26,27] and simplifies the data within each grid, as in the uniform grid method; such methods require grid construction before data simplification.…”
Section: PDS-Algorithm
confidence: 99%
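The grid-based category described above can be sketched as a uniform-grid (voxel) simplification. This is a hedged illustration, not the specific method of [26,27]: points are bucketed into fixed-size cells, and each occupied cell is replaced by the centroid of its points.

```python
# Hedged sketch of uniform-grid simplification: bucket points into
# fixed-size voxels, then keep one centroid per occupied voxel.
import numpy as np

def uniform_grid_simplify(points, cell_size):
    """Replace each occupied voxel with the centroid of its points."""
    # Integer voxel coordinates for every point.
    keys = np.floor(points / cell_size).astype(np.int64)
    # Group points by voxel: `inverse[i]` is the voxel index of point i.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    sums = np.zeros((n_cells, points.shape[1]))
    counts = np.zeros(n_cells)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1)      # count points per voxel
    return sums / counts[:, None]
```

The grid construction step the quote refers to is the `np.floor(points / cell_size)` bucketing: it must happen before any per-cell simplification, and the cell size directly trades output density against feature preservation.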