2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01012
Sign-Agnostic Implicit Learning of Surface Self-Similarities for Shape Modeling and Reconstruction from Raw Point Clouds

Cited by 25 publications (13 citation statements) | References 23 publications
“…Motivated by deep image prior [63], Deep Geometric Prior (DGP) [64] verifies the efficacy of deep networks as a prior for geometric surface modeling, even when the networks are not trained. Later on, Point2Mesh [65] and SAIL-S3 [66] extend the global modeling adopted in [64] into local ones, where the former constructs its local implicit functions as a weight-shared MeshCNN [67] and the latter constructs them as a weight-shared Multi-Layer Perceptron (MLP). Deep Manifold Prior [68] delivers mathematical analyses of such modeling properties for MLPs as well as convolutional networks.…”
Section: Modeling Priors
confidence: 99%
“…A solution is to split the input points on a regular 3D grid and to optimize one latent vector per voxel [12] (DeepLS), possibly from overlapping input patches. Patch splitting can also be irregular and optimization-driven to favor self-similarities, with a global post-optimization to flip inconsistent local signs [129] (SAIL-S3). But whether these methods optimize only the latent vectors or a whole network as well, for patch decoding, they make surface reconstruction significantly slower, leading to reduced test sets.…”
Section: Related Work 2.1 3D Representations
confidence: 99%
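The grid-splitting idea described in the statement above can be sketched in a few lines. This is a hypothetical, minimal illustration (not code from DeepLS or SAIL-S3): points are binned into regular voxels, and each occupied voxel gets its own latent code that would then be optimized during reconstruction. The function names, voxel size, and latent dimension are all assumptions chosen for the sketch.

```python
import numpy as np

def assign_voxels(points, voxel_size):
    # Integer voxel coordinates: floor-divide each point by the voxel size.
    return np.floor(points / voxel_size).astype(int)

def build_latents(points, voxel_size, latent_dim=8, seed=0):
    # One independently optimizable latent vector per *occupied* voxel,
    # randomly initialized here; a DeepLS-style method would optimize
    # these codes against a local implicit decoder.
    rng = np.random.default_rng(seed)
    occupied = {tuple(v) for v in assign_voxels(points, voxel_size)}
    return {key: rng.standard_normal(latent_dim) for key in occupied}

# Toy point cloud: two points share a voxel, one falls in another.
pts = np.array([[0.10, 0.10, 0.10],
                [0.90, 0.90, 0.90],
                [0.15, 0.10, 0.05]])
latents = build_latents(pts, voxel_size=0.5)
```

The dictionary keyed by voxel coordinates keeps storage proportional to occupied voxels only, which is why such local schemes scale to sparse raw scans better than one dense global grid.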
“…Departing from occupancy or distance fields, ShapeGF [11] models a shape by learning the gradient field of its log-density, then samples points in high-likelihood regions of the shape and meshes them. Other works also study the decomposition of shapes and implicit surfaces into parts [35,34,84,28,48,109], possibly overfitting networks to generate or render a single object or scene [117,101,65,129,103,73,126].…”
Section: Related Work 2.1 3D Representations
confidence: 99%
“…Iso-points [83] tried to impose geometry-aware sampling and regularization in the learning. Moreover, implicit functions can also be learned from point clouds with additional constraints, such as geometric regularization [19], sign-agnostic learning with a specially designed loss function [1], sign-agnostic learning with local surface self-similarities and post-hoc sign processing [71,85], constraints on gradients [2], or a divergence penalty [4].…”
Section: Related Work
confidence: 99%
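The "specially designed loss function" for sign-agnostic learning mentioned in the last statement compares the magnitude of the network output against an unsigned distance, so the loss does not depend on which sign convention the implicit function adopts. A minimal sketch, assuming a SAL-style unsigned regression target (the function name and inputs are illustrative, not the cited paper's code):

```python
import numpy as np

def sign_agnostic_loss(f_vals, unsigned_dist):
    # | |f(x)| - d(x) | : penalizes the mismatch between the *magnitude*
    # of the implicit function and an unsigned distance d(x), so f and -f
    # incur exactly the same loss -- no precomputed signs are needed.
    return float(np.mean(np.abs(np.abs(f_vals) - unsigned_dist)))

d = np.array([0.1, 0.2, 0.0])        # unsigned distances to the point cloud
f_pos = np.array([0.1, 0.2, 0.0])    # one sign convention
f_neg = -f_pos                        # the flipped convention
# Both conventions yield an identical loss value.
```

This sign symmetry is precisely why such methods need the post-processing step mentioned above (e.g., flipping inconsistent local signs across patches) before a globally consistent signed field can be extracted.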