2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00464
PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows

Abstract: As 3D point clouds become the representation of choice for multiple vision and graphics applications, the ability to synthesize or reconstruct high-resolution, high-fidelity point clouds becomes crucial. Despite the recent success of deep learning models in discri…

Figure 1: Our model transforms points sampled from a simple prior to realistic point clouds through continuous normalizing flows. Videos of the transformations can be viewed on the project website: https://www.guandaoyang.com/PointFlow/.

Citations: Cited by 558 publications (490 citation statements)
References: 31 publications
“…Recently, there has been progress in the development of flow-based generative models which can be trained to directly produce samples from a given probability distribution; early success has been demonstrated in theories of bosonic matter, spin systems, molecular systems, and for Brownian motion [24][25][26][27][28][29][30][31][32][33][34]. This progress builds on the great success of flow-based approaches for image, text, and structured object generation [35][36][37][38][39][40][41][42], as well as non-flow-based machine learning techniques applied to sampling for physics [43][44][45][46][47][48]. If flow-based algorithms can be designed and implemented at the scale of state-of-the-art calculations, they would enable efficient sampling in lattice theories that are currently hindered by CSD.…”
mentioning
confidence: 99%
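The statement above describes flow-based generative models that are trained to produce samples directly from a target distribution while keeping the sample density tractable. A minimal sketch of the underlying change-of-variables idea, using a single hypothetical affine flow layer in plain numpy (not any cited implementation), looks like this:

```python
import numpy as np

def affine_flow(z, scale, shift):
    """Minimal invertible flow: x = scale * z + shift.

    Returns the transformed samples and the log |det Jacobian|
    needed by the change-of-variables formula:
        log p_X(x) = log p_Z(z) - log |det J|
    """
    x = scale * z + shift
    log_det = np.log(np.abs(scale))  # Jacobian of an affine map is constant
    return x, log_det

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)              # samples from the simple prior N(0, 1)
x, log_det = affine_flow(z, scale=2.0, shift=5.0)

# x is now distributed as N(5, 4), and its exact log-density is available:
log_pz = -0.5 * (z**2 + np.log(2.0 * np.pi))  # prior log-density at z
log_px = log_pz - log_det                     # model log-density at x
```

Real flow models (including continuous normalizing flows such as PointFlow's) stack many such invertible transformations, or integrate an ODE, but the sampling-plus-exact-density property illustrated here is the same.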
“…Learned Shape Representations: To leverage shape priors, shape reconstruction methods began representing shape as a learned feature vector, with a trained decoder to a mesh [33,13,38,14,16], point cloud [10,19,43], voxel grid [9,40,6,39], or octree [34,30,29]. Most recently, representing shape as a vector with an implicit surface function decoder has become popular, with methods such as OccNet [21], ImNet [8], DeepSDF [24], and DISN [42].…”
Section: Related Work
mentioning
confidence: 99%
“…In this work, registration between different spaces is performed with the provided sketch and the internal knowledge that comes from the training procedure. Similarly, the generative model of Yang et al [69] uses a variation of an autoencoder architecture to generate 3D point clouds by modeling them as a distribution of distributions. Concretely, their method learns the distribution of shapes at the first level, and the distribution of points given a shape at the second level.…”
Section: Target Level
mentioning
confidence: 99%
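The two-level "distribution of distributions" described in the statement above — first sample a shape, then sample points given that shape — can be sketched as follows. This is an illustrative toy, not PointFlow's architecture: the shape code, the decoder, and the Gaussian point model here are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_shape_code():
    # First level: draw a shape code from the shape-level distribution
    # (here simply a 2-D standard-normal prior).
    return rng.standard_normal(2)

def sample_points(shape_code, n_points=2048):
    # Second level: draw points from the per-shape point distribution.
    # A hypothetical "decoder" maps the code to a mean and log-scale
    # of an isotropic Gaussian over 3-D points.
    mean, log_scale = shape_code
    return mean + np.exp(log_scale) * rng.standard_normal((n_points, 3))

code = sample_shape_code()
cloud = sample_points(code)  # one point cloud: 2048 points in 3-D
```

In the actual model of Yang et al., both levels are learned: a latent distribution over shapes, and a continuous normalizing flow that transforms prior samples into the points of a given shape.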