2022
DOI: 10.48550/arxiv.2204.02394
Preprint

SE(3)-Equivariant Attention Networks for Shape Reconstruction in Function Space

Cited by 2 publications (2 citation statements)
References 0 publications
“…For example, equivariant networks with SE(3)-equivariance are developed to process 3D data such as point clouds [10], meshes [13], and voxels [35]. However, due to the added complexity, most equivariant networks for 3D perception are restricted to relatively simple tasks with small-scale inputs, such as object-wise classification, registration, part segmentation, and reconstruction [30,46,9]. In the following, we will review recent progress in extending equivariance to large-scale outdoor 3D perception tasks.…”
Section: Equivariant Learning
Mentioning confidence: 99%
“…c) Equivariant neural networks: Chatzipantazis et al [203] introduced an SE(3)-equivariant coordinate-based attention network called TF-ONet for 3-D surface reconstruction. Local shape modeling and equivariance are the two core design principles of this method.…”
Section: Implicit Neural Representation Based on Variants of SDF or...
Mentioning confidence: 99%
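The citation above describes equivariance as a core design principle of the cited work: the reconstructed implicit field should move rigidly with the input shape. The following is only a minimal toy sketch of that property, not the architecture of TF-ONet or any network from the cited papers; the `occupancy` function and all names here are hypothetical stand-ins that are SE(3)-invariant by construction (they depend only on query-to-point distances), used to illustrate the check f(Rx + t; RP + t) = f(x; P).

```python
import numpy as np

def random_se3():
    # Draw a random rigid motion: a proper rotation (via QR) plus a translation.
    A = np.random.randn(3, 3)
    Q, _ = np.linalg.qr(A)
    if np.linalg.det(Q) < 0:       # flip one axis so det(Q) = +1
        Q[:, 0] *= -1
    t = np.random.randn(3)
    return Q, t

def occupancy(query, cloud):
    # Toy stand-in for an equivariant occupancy network: the value depends only
    # on distances between the query point and the conditioning point cloud,
    # so the predicted field transforms rigidly with the input shape.
    d = np.linalg.norm(cloud - query, axis=-1)
    return float(np.exp(-d).mean())

cloud = np.random.randn(128, 3)    # hypothetical conditioning point cloud
query = np.random.randn(3)         # hypothetical query coordinate

R, t = random_se3()
before = occupancy(query, cloud)
after = occupancy(R @ query + t, cloud @ R.T + t)
print(abs(before - after) < 1e-8)  # True: the field follows the rigid motion
```

Under these assumptions, the printed check holds because point-to-point distances are preserved by any rotation plus translation; an actual equivariant attention network would enforce the same property architecturally rather than by construction of a hand-written feature.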