2022
DOI: 10.1007/978-3-031-20062-5_3
MeshMAE: Masked Autoencoders for 3D Mesh Data Analysis

Cited by 18 publications (3 citation statements)
References 43 publications
“…The framework attempted to represent optimal features employing both spatial discontinuity specificities, like vertices and neighboring indices. In addition to MeshNet, mesh representation methods have been proposed by several other approaches [18][19][20].…”
Section: 3D Object Feature Learning in Each Modality (citation type: mentioning)
Confidence: 99%
“…Most generators using the same type of representation share similar output layers, such as voxel-based works [15,16,28,32], point-cloud-based works [49,147,176], MeshCNN [149] and SubdivNet [151] for mesh representation learning, and OccNet [103] and DeepSDF [102,104] for implicit representations. Very recently, the transformer mechanism has also been employed in 3D deep learning [141,226] with demonstrated advantages compared to classic 3D convolutional networks. The backbone network architecture usually determines the shape representation, which in turn affects the design of the generator.…”
Section: 3D Backbone Network Design (citation type: mentioning)
Confidence: 99%
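The quoted survey's point, that the chosen shape representation dictates the generator's output layer while the latent backbone can stay the same, can be made concrete with a small sketch. Everything below (module names, grid size, point count, dimensions) is an illustrative assumption, not code taken from any of the cited works.

```python
# Illustrative only: how the output head changes with the 3D representation
# (voxels, point clouds, implicit fields) for one shared latent code.
import torch
import torch.nn as nn

latent_dim = 128  # assumed latent size, shared by all heads

class VoxelHead(nn.Module):
    """Decode a latent code into a dense 16^3 occupancy grid."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 16 ** 3)
    def forward(self, z):
        return torch.sigmoid(self.fc(z)).view(-1, 16, 16, 16)

class PointCloudHead(nn.Module):
    """Decode a latent code into a fixed-size set of xyz points."""
    def __init__(self, n_points=1024):
        super().__init__()
        self.fc = nn.Linear(latent_dim, n_points * 3)
        self.n_points = n_points
    def forward(self, z):
        return self.fc(z).view(-1, self.n_points, 3)

class ImplicitHead(nn.Module):
    """OccNet/DeepSDF-style: predict occupancy at query points given z."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, z, xyz):                 # z: (B, latent), xyz: (B, M, 3)
        z = z.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return self.mlp(torch.cat([z, xyz], dim=-1)).squeeze(-1)

z = torch.randn(2, latent_dim)
vox = VoxelHead()(z)                           # (2, 16, 16, 16)
pts = PointCloudHead()(z)                      # (2, 1024, 3)
occ = ImplicitHead()(z, torch.rand(2, 64, 3))  # (2, 64)
```

The heads differ only in how the decoded quantity is shaped and supervised, which is why works sharing a representation tend to share output-layer designs.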
“…MeshMAE [119] (2022.11) focuses on processing 3D mesh data. The research here is mainly on how to handle meshes and utilize MAE.…”
Section: E. 3D and Point Clouds, 1) 3D Image (citation type: mentioning)
Confidence: 99%
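The quoted survey only names the approach; as a rough illustration of the masked-autoencoder idea applied to mesh data, the sketch below masks a random subset of per-face feature "patches" and trains an encoder to reconstruct the masked ones. The module names, feature dimensions, masking ratio, and patching scheme are all simplifying assumptions, not MeshMAE's actual architecture.

```python
# Hypothetical sketch of MAE-style pretraining over mesh face features.
# Not MeshMAE's actual design; shapes and names are assumptions.
import torch
import torch.nn as nn

class TinyMeshMAE(nn.Module):
    def __init__(self, feat_dim=10, embed_dim=64, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(feat_dim, embed_dim)       # per-face "patch" embedding
        self.mask_token = nn.Parameter(torch.zeros(embed_dim))
        enc = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.decoder = nn.Linear(embed_dim, feat_dim)     # reconstruct raw face features

    def forward(self, faces):                             # faces: (B, N, feat_dim)
        tokens = self.embed(faces)
        # Randomly mask face tokens, replacing them with a learned mask token.
        mask = torch.rand(faces.shape[:2], device=faces.device) < self.mask_ratio
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        recon = self.decoder(self.encoder(tokens))
        # Reconstruction loss only on masked faces, as in MAE-style training.
        loss = ((recon - faces) ** 2)[mask].mean()
        return loss, mask

# Usage: random per-face features (e.g., face center + normal + corner offsets).
model = TinyMeshMAE()
loss, mask = model(torch.randn(2, 256, 10))
loss.backward()
```

The self-supervised objective needs no labels: masking and reconstructing face features forces the encoder to learn mesh structure, after which the encoder can be fine-tuned for downstream analysis tasks.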