2020
DOI: 10.1109/tip.2019.2957935
Deep Unsupervised Learning of 3D Point Clouds via Graph Topology Inference and Filtering

Abstract: We propose a deep autoencoder with graph topology inference and filtering to achieve compact representations of unorganized 3D point clouds in an unsupervised manner. Many previous works discretize 3D points into voxels and then use lattice-based methods to process and learn 3D spatial information; however, this leads to inevitable discretization errors. In this work, we handle raw 3D points without such a compromise. The encoder of the proposed networks adopts a similar architecture to PointNet, which is …
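The abstract only sketches the architecture, so a minimal illustration of the graph-topology-inference-plus-filtering idea is given below. The layer sizes, the similarity-based adjacency, and the module name GraphInferenceFilter are assumptions made for illustration, not the authors' exact design; the point is that the graph is learned from the point features themselves rather than fixed by a voxel lattice.

```python
# Minimal sketch (PyTorch) of graph topology inference followed by graph
# filtering over raw point features; all sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphInferenceFilter(nn.Module):
    """Infer a dense graph over point features, then filter features on it."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, in_dim) per-point features.
        # Topology inference: pairwise similarities, row-normalized so each
        # point aggregates from a soft neighborhood.
        sim = torch.bmm(x, x.transpose(1, 2))          # (B, N, N)
        adj = F.softmax(sim, dim=-1)                   # inferred graph
        # Graph filtering: propagate features over the inferred graph.
        return F.relu(self.weight(torch.bmm(adj, x)))  # (B, N, out_dim)


# Example: filter features of 1024 points with 64-dim features.
layer = GraphInferenceFilter(64, 128)
points_feat = torch.randn(2, 1024, 64)
out = layer(points_feat)   # shape: (2, 1024, 128)
```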

Cited by 61 publications (32 citation statements)
References 65 publications (111 reference statements)
“…The folding method uses MLPs to transform the 2D grid points to the surface of the point clouds. However, this results in a default and uniform plane pattern for the generated 3D points [17]. So, we need Fig.…”
Section: D Grid Transformation Network (mentioning)
Confidence: 99%
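For context on the folding operation this excerpt critiques, here is a rough sketch of a folding-style decoder in the spirit of FoldingNet: a shared MLP maps a fixed 2D grid, concatenated with the global shape code, onto a 3D surface. The grid resolution, MLP widths, and the single folding stage are illustrative assumptions, not the cited paper's exact design.

```python
# Rough sketch (PyTorch) of a folding-style decoder; sizes are assumed.
import torch
import torch.nn as nn


class FoldingDecoder(nn.Module):
    def __init__(self, code_dim: int = 512, grid_size: int = 32):
        super().__init__()
        # Fixed 2D grid in [-1, 1]^2; this is the "default plane pattern"
        # that the generated 3D points inherit.
        lin = torch.linspace(-1.0, 1.0, grid_size)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u, v], dim=-1).reshape(-1, 2))
        self.fold = nn.Sequential(
            nn.Linear(code_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        # code: (B, code_dim) global shape code from the encoder.
        b, m = code.shape[0], self.grid.shape[0]
        grid = self.grid.unsqueeze(0).expand(b, m, -1)
        code = code.unsqueeze(1).expand(b, m, -1)
        return self.fold(torch.cat([grid, code], dim=-1))  # (B, M, 3) points


decoder = FoldingDecoder()
points = decoder(torch.randn(4, 512))   # (4, 1024, 3) reconstructed points
```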
“…MLP based completion networks: Pioneered by PointNet [24], many approaches are introduced to work directly on point cloud to avoid geometric information loss [3,36,43,44]. These methods apply the shared multi-layer perceptron (MLP) to extract features and the max-pooling layer to solve the permutation invariant problem.…”
Section: Related Work (mentioning)
Confidence: 99%
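The shared-MLP-plus-max-pooling pattern described in this excerpt, popularized by PointNet, can be sketched as follows. The layer widths and code dimension are assumptions; the final assertion simply demonstrates the permutation invariance the excerpt refers to.

```python
# Sketch (PyTorch) of a PointNet-style encoder: a shared per-point MLP
# followed by max pooling, which makes the output order-invariant.
import torch
import torch.nn as nn


class SharedMLPEncoder(nn.Module):
    def __init__(self, code_dim: int = 512):
        super().__init__()
        # The same MLP is applied to every point independently ("shared").
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3). Max pooling over the point axis discards ordering,
        # so permuting the input points leaves the code unchanged.
        return self.mlp(xyz).max(dim=1).values   # (B, code_dim)


enc = SharedMLPEncoder()
cloud = torch.randn(2, 2048, 3)
code = enc(cloud)
perm = cloud[:, torch.randperm(2048)]           # same points, shuffled order
assert torch.allclose(enc(perm), code, atol=1e-6)
```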
“…Fig. 12 showed their GNN-based auto-encoder (GAE) [192], [193] to encode 3D point clouds into a limited number of latent variables. One of the benefits of the GAE is that it allows graph signal reconstruction from a limited number of latent variables without requiring additional metadata.…”
Section: B Applied Deep Neural Network (mentioning)
Confidence: 99%
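To make the autoencoder pipeline in this excerpt concrete, a reconstruction loss such as the Chamfer distance, a common choice for unsupervised point-cloud autoencoders that is assumed here rather than stated in the excerpt, compares the decoded points with the input cloud directly, without any extra metadata:

```python
# Sketch (PyTorch) of a Chamfer-distance reconstruction loss, an assumed
# training signal for a point-cloud autoencoder.
import torch


def chamfer_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred: (B, M, 3) decoded points, target: (B, N, 3) input points.
    # Pairwise squared distances between every decoded and every input point.
    diff = pred.unsqueeze(2) - target.unsqueeze(1)      # (B, M, N, 3)
    d2 = (diff ** 2).sum(dim=-1)                        # (B, M, N)
    # Each decoded point matches its nearest input point, and vice versa.
    return d2.min(dim=2).values.mean() + d2.min(dim=1).values.mean()


recon = torch.randn(2, 1024, 3, requires_grad=True)    # stand-in decoder output
loss = chamfer_distance(recon, torch.randn(2, 2048, 3))
loss.backward()   # gradients w.r.t. recon are now available in recon.grad
```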