2023
DOI: 10.1109/tmm.2022.3206664

Self-Supervised Point Cloud Representation Learning via Separating Mixed Shapes

Cited by 18 publications (3 citation statements)
References 58 publications
“…The second step is to reconstruct the occluded point cloud, and the final step is to use the encoder weights as the initialization for the downstream point cloud task. Motivated by the enormous success of self-supervised learning, Sun et al. [191] developed a novel self-supervised learning technique called Mixing and Disentangling (MD) for learning 3D point cloud representations. The authors mixed two input shapes and required the model to learn to disentangle the original inputs from the mixed shape.…”
Section: Self-supervised Methods
confidence: 99%
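As a rough illustration of the mixing step this excerpt describes, the sketch below combines two point clouds into a single mixed shape. The half-and-half random sampling and the helper name mix_point_clouds are illustrative assumptions, not the authors' exact procedure, and the disentangling network itself is omitted.

# Minimal sketch of the "mixing" pretext: two point clouds are combined
# into one mixed shape, and a network would then be trained to disentangle
# (reconstruct) the two originals from it. The half-and-half sampling is
# an illustrative assumption, not the paper's exact recipe.
import numpy as np

def mix_point_clouds(pc_a: np.ndarray, pc_b: np.ndarray, n_points: int = 1024) -> np.ndarray:
    """Mix two (N, 3) point clouds by randomly sampling roughly half the points from each."""
    half = n_points // 2
    idx_a = np.random.choice(len(pc_a), half, replace=False)
    idx_b = np.random.choice(len(pc_b), n_points - half, replace=False)
    return np.concatenate([pc_a[idx_a], pc_b[idx_b]], axis=0)

# Usage: the mixed shape is the encoder input; the pretext target is to
# recover pc_a and pc_b (e.g., via a Chamfer-distance reconstruction loss).
pc_a = np.random.rand(2048, 3).astype(np.float32)  # stand-in for one shape
pc_b = np.random.rand(2048, 3).astype(np.float32)  # stand-in for another shape
mixed = mix_point_clouds(pc_a, pc_b)
print(mixed.shape)  # (1024, 3)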
“…Therefore, the model was utilized for diverse downstream tasks, from classification to segmentation. Furthermore, Sun et al. [35] also focused on pre-training a model via mixing and disentangling for point clouds. Specifically, the work performed self-supervised learning and generated new point clouds by mixing multiple samples instead of training on large datasets.…”
Section: B. Point-based Methods
confidence: 99%
“…Differently, generative pre-training methods construct a pretext task by applying operations such as masking (Yu et al. 2022; Wang et al. 2021) and mixing (Sun et al. 2022) to the original point cloud scene. Subsequently, a corresponding decoder and loss function are designed to complete the pre-training.…”
Section: Related Work
confidence: 99%
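For the masking operation this excerpt mentions, a minimal sketch follows, assuming a Point-MAE-style random point split; the 60% mask ratio and the helper name random_point_mask are illustrative assumptions, not the cited papers' exact settings.

# Minimal sketch of a "masking" pretext: a random subset of points is
# hidden, the encoder sees only the visible points, and a decoder would be
# trained to reconstruct the masked ones.
import numpy as np

def random_point_mask(pc: np.ndarray, mask_ratio: float = 0.6):
    """Split an (N, 3) point cloud into visible and masked subsets."""
    n = len(pc)
    n_mask = int(n * mask_ratio)
    perm = np.random.permutation(n)
    masked_idx, visible_idx = perm[:n_mask], perm[n_mask:]
    return pc[visible_idx], pc[masked_idx]

pc = np.random.rand(2048, 3).astype(np.float32)
visible, masked = random_point_mask(pc)
print(visible.shape, masked.shape)  # (820, 3) (1228, 3)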