2021
DOI: 10.48550/arxiv.2106.11795
Preprint

DeepMesh: Differentiable Iso-Surface Extraction

Abstract: Geometric Deep Learning has recently made striking progress with the advent of continuous Deep Implicit Fields. They allow for detailed modeling of watertight surfaces of arbitrary topology while not relying on a 3D Euclidean grid, resulting in a learnable parameterization that is unlimited in resolution. Unfortunately, these methods are often unsuitable for applications that require an explicit mesh-based surface representation because converting an implicit field to such a representation relies on the Marchi…

Cited by 3 publications (4 citation statements)
References 40 publications
“…[50] model the problem of triangulation locally by predicting a logarithmic map, which allows triangulation in 2D using non-differentiable Delaunay triangulation. Finally, some recent works focus on using the neural implicit representation as the core representation for differentiable meshing or learning [16,48,26]. Neural Marching Cubes [16] implements a data-driven approach to improve the mesh quality at sharp edges by defining a learnable representation to differentiate between the topological configurations of marching cubes.…”
Section: Related Work
“…In contrast, our method operates in continuous space and therefore on arbitrary resolutions. DeepMesh [26] uses a trick: an additional forward pass on all mesh vertex locations to compute gradients with respect to an underlying implicit representation, without needing to make the meshing itself differentiable. They use a non-differentiable marching cubes algorithm to generate the output and define loss functions directly on the obtained mesh.…”
Section: Related Work
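The trick this excerpt describes can be sketched numerically. The following is a minimal, illustrative sketch, not the authors' code: a toy analytic sphere SDF stands in for a learned neural field, analytic sampling of the zero level set stands in for marching cubes, and the helper names (`sdf`, `df_dr`, `grad_x`) are my own.

```python
# Minimal sketch of the gradient trick described above (illustrative, not the
# authors' code): the surface is extracted WITHOUT differentiating the mesher,
# then gradients w.r.t. the field parameter flow through a second evaluation
# of the implicit field at the frozen vertex locations.
import numpy as np

rng = np.random.default_rng(0)
r = 1.0  # the implicit field's only parameter here: sphere radius

# Toy analytic SDF standing in for a neural field: f(x; r) = ||x|| - r
def sdf(x, r):   return np.linalg.norm(x, axis=-1) - r
def df_dr(x):    return -np.ones(len(x))                          # df/dr
def grad_x(x):   return x / np.linalg.norm(x, axis=-1, keepdims=True)

# 1) Non-differentiable "meshing": sample the zero level set analytically
#    (stand-in for running marching cubes on a grid of f values).
dirs = rng.normal(size=(4096, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
verts = dirs * r
assert np.allclose(sdf(verts, r), 0.0)  # vertices lie on the level set

# 2) Loss defined directly on the extracted vertices: pull radius toward 1.5,
#    L = mean_i (||v_i|| - 1.5)^2, with its per-vertex gradient dL/dv_i.
target = 1.5
dL_dv = (2.0 / len(verts)) * (np.linalg.norm(verts, axis=-1, keepdims=True)
                              - target) * grad_x(verts)

# 3) Second forward pass at the vertices: vertices move along the surface
#    normal n = grad_x f when r changes, dv/dr = -n * df/dr (unit-norm SDF),
#    so dL/dr = sum_i (dL/dv_i . n_i) * (-df_i/dr).
n = grad_x(verts)
dL_dr = np.sum((dL_dv * n).sum(axis=-1) * (-df_dr(verts)))
# Analytic check: L(r) = (r - 1.5)^2, so dL/dr at r = 1 is 2*(1.0 - 1.5) = -1
```

The key point is step 3: no gradient ever flows through the extraction itself; the relation between vertex motion and field parameters is recovered from the implicit function theorem applied to f(v; r) = 0.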
“…Thus they differ markedly from the present work, which uses the latent surface representation as a prior. The work [25] has used the latent representation to solve shape optimization problems, but the forward problem is modeled with a neural network. This makes heavy regularization crucial, and thus the optimized shape can only differ slightly from the initial guess.…”
Section: Introduction
“…It allows us to represent various shapes with a number of parameters smaller than the number of grid points of interest. Contrary to deep learning shape optimization frameworks, such as [20], [21], where the Marching Cubes algorithm [22], [23] extracts the surface from the neural SDF on some prede…”
Section: Introduction
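The grid-based extraction this excerpt contrasts against can be sketched with scikit-image's `marching_cubes`; the sphere SDF and the 33³ grid below are illustrative choices of mine, not taken from the cited frameworks.

```python
# Sketch of grid-based surface extraction (illustrative): sample a toy SDF on
# a predefined regular grid, then run (non-differentiable) marching cubes.
import numpy as np
from skimage.measure import marching_cubes

n = 33
axis = np.linspace(-1.0, 1.0, n)             # predefined grid over [-1, 1]^3
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 0.5      # sphere of radius 0.5

spacing = (axis[1] - axis[0],) * 3
verts, faces, normals, values = marching_cubes(sdf, level=0.0, spacing=spacing)
verts = verts - 1.0                          # grid coords -> world coords

radii = np.linalg.norm(verts, axis=1)        # should cluster near 0.5
```

Note that the output resolution here is capped by the predefined grid, which is exactly the limitation the citing text attributes to such pipelines.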