2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.00635
Progressive Seed Generation Auto-encoder for Unsupervised Point Cloud Learning

Cited by 20 publications (15 citation statements)
References 12 publications
“…Many studies have used deep learning to cluster or organise everyday 3D objects by their shape [31, 44, 45, 46]. These tasks use datasets that have distinct semantic (or human meaningful) classes.…”
Section: Results
Mentioning confidence: 99%
“…Following similar procedures to previous researchers [31,44,45,46], we used the trained DFN to extract features from each cell. These features were then used to train a linear SVM.…”
Section: Learned Shape Features Predict Cell Treatment
Mentioning confidence: 99%
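The linear-probe protocol quoted above (frozen features plus a linear classifier) can be sketched as follows. This is an illustrative stand-in, not the authors' code: the features and labels are synthetic, and a closed-form ridge-regression classifier is used in place of the linear SVM named in the quote.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features produced by a frozen, pre-trained encoder
# (hypothetical data; the cited work extracts them from point clouds).
n, d, k = 200, 64, 4
labels = rng.integers(0, k, size=n)
centers = rng.normal(size=(k, d))
features = centers[labels] + 0.5 * rng.normal(size=(n, d))

# Linear probe: one-vs-rest ridge-regression classifier on frozen
# features (a simple substitute for the linear SVM in the quoted protocol).
split = 150
X_tr, y_tr = features[:split], labels[:split]
X_te, y_te = features[split:], labels[split:]

Y = np.eye(k)[y_tr]                      # one-hot targets
lam = 1e-2                               # ridge penalty
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ Y)
pred = (X_te @ W).argmax(axis=1)
acc = (pred == y_te).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

The key point of the protocol is that only the linear classifier is trained; probe accuracy then measures how linearly separable the frozen features are.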
“…Currently, self-supervised learning has been extensively studied for point cloud representation learning; these methods focus on constructing a pretext task that helps the network learn better 3D point cloud representations. A commonly adopted pretext task is to reconstruct the input point cloud from the latent encoding space, which can be implemented through Variational Autoencoders [29][30][31][32][33][34], Generative Adversarial Networks (GANs) [35,36], Gaussian Mixture Models [12,37], etc. However, these methods are computationally expensive and rely excessively on reconstructing local details, making it difficult to learn high-level semantic features.…”
Section: Related Work
Mentioning confidence: 99%
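The reconstruction pretext task described in the quote above typically scores a decoded point cloud against the input with a Chamfer-style set distance. A minimal NumPy sketch of that loss (illustrative only; not taken from any of the cited papers):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3)."""
    # Pairwise squared Euclidean distances via broadcasting, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Mean nearest-neighbour distance in each direction, summed.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
noisy = cloud + 0.01 * rng.normal(size=(128, 3))
print(chamfer_distance(cloud, cloud))   # identical sets give 0.0
print(chamfer_distance(cloud, noisy))
```

Because the loss depends only on nearest-neighbour matches between the two sets, it is invariant to point ordering, which is why it is the standard reconstruction objective for point-cloud autoencoders.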
“…For instance, the encoder of 3DCapsuleNet [49] is designed to process 3D point clouds and aggregate the final latent capsules via dynamic routing, after which the decoder reconstructs the original point cloud from multiple point patches. [45] proposes an autoencoder architecture in which rich point-wise features are progressively produced in multiple stages through a seed generation module. Eckart et al. [13] further extend the traditional autoencoder paradigm by modeling the bottleneck layer with a discrete generative model.…”
Section: Related Work
Mentioning confidence: 99%
“…
Method | Accuracy (%)
GraphTER† [16] | 87.8
FoldingNet [46] | 88.4
PointCapsNet [49] | 88.9
Multi-Tasks [19] | 89.1
Yang et al. [45] | 90.9
PointGLR† [34] | 91.7
…”
Section: SSL Accuracy
Mentioning confidence: 99%