2021 International Conference on 3D Vision (3DV) 2021
DOI: 10.1109/3dv53792.2021.00113
AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations

Cited by 14 publications (2 citation statements) | References 25 publications
“…MeshUDF [85] further extends DeepSDF to reconstruct open surfaces: it predicts the unsigned distance of any given point in space and extracts the surface using a customized marching cubes algorithm. More recently, a few methods [86], [87] adopt networks based on the transformer [88], [89] to aid reconstruction. Note also that learning semantic priors enables reconstruction of surfaces from raw observations that originally contain little or no 3D shape information (e.g., as few as a single RGB image [5], [90], [91]), by training encoders that learn latent shape spaces from such observations.…”
Section: Learning Semantic Priors
confidence: 99%
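The excerpt above contrasts unsigned distance fields (UDFs) with signed ones. A minimal sketch of the idea, not MeshUDF itself: a UDF for an open (non-watertight) surface, here a hypothetical flat square patch, is nonnegative everywhere and carries no inside/outside sign, with the surface recovered as its zero set.

```python
import numpy as np

def udf_patch(p, half=1.0):
    """Unsigned distance from point p to the open square patch
    [-half, half]^2 x {0} in the z = 0 plane (toy example)."""
    p = np.asarray(p, dtype=float)
    # Closest point on the patch: clamp x and y, project z to 0.
    closest = np.array([np.clip(p[0], -half, half),
                        np.clip(p[1], -half, half),
                        0.0])
    return float(np.linalg.norm(p - closest))

print(udf_patch([0.0, 0.0, 0.5]))   # 0.5, directly above the patch
print(udf_patch([0.0, 0.0, -0.5]))  # 0.5 as well: no sign distinguishes the sides
print(udf_patch([2.0, 0.0, 0.0]))   # 1.0, distance to the patch boundary
```

Because the field never changes sign across an open surface, standard marching cubes (which locates sign changes) does not apply directly; that is why MeshUDF requires a customized surface-extraction step.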
“…The aforementioned methods approximate global implicit functions either with [83,21,23,40,7,94] or without [37,8,18] blending of local implicit functions. Our method shares the idea of approximating a global implicit function by blending local ones, but our novelty lies in letting the neural network adaptively split shapes, so that parts can be blended well in the latent space according to their spatial surface properties or intrinsic attributes.…”
Section: Related Work
confidence: 99%
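The excerpt above describes approximating a global implicit function by blending local ones. A minimal sketch of that general scheme under assumed details (Gaussian weights, hand-written local functions; none of this is the cited method): local implicit values are combined with a partition-of-unity of weights centered at local patch origins.

```python
import numpy as np

def blend_local_implicits(p, centers, local_fns, sigma=0.5):
    """Evaluate a global implicit value at point p as a weighted average
    of local implicit functions, with Gaussian weights centered at
    `centers`, normalized into a partition of unity."""
    p = np.asarray(p, dtype=float)
    d2 = np.sum((centers - p) ** 2, axis=1)       # squared distance to each center
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w = w / w.sum()                                # weights sum to 1
    vals = np.array([f(p) for f in local_fns])     # local implicit values at p
    return float(np.dot(w, vals))

# Two hypothetical local "patches", each the signed distance to a plane.
centers = np.array([[-1.0, 0.0], [1.0, 0.0]])
local_fns = [lambda p: p[1],                       # plane y = 0
             lambda p: p[1] - 0.2]                 # plane y = 0.2
print(blend_local_implicits(np.array([0.0, 0.1]), centers, local_fns))  # 0.0
```

At the query point both centers are equidistant, so the two local values (0.1 and -0.1) average to zero; the blended zero set interpolates smoothly between the two local surfaces.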