2022
DOI: 10.48550/arxiv.2203.13944
Preprint

SolidGen: An Autoregressive Model for Direct B-rep Synthesis

Abstract: The Boundary representation (B-rep) format is the de facto shape representation in computer-aided design (CAD) for modeling watertight solid objects. Recent approaches to generating CAD models have focused on learning sketch-and-extrude modeling sequences that are executed by a solid modeling kernel in a post-process to recover a B-rep. In this paper, we present a new approach that enables learning from and synthesizing B-reps without the need for supervision through CAD modeling sequence data. Our method, SolidGen, is…

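The abstract describes SolidGen as an autoregressive model that synthesizes B-reps directly, rather than via a recorded modeling sequence. As a rough illustration of the autoregressive idea only (not the paper's actual architecture), the sketch below shows a tiny causal Transformer that emits quantized tokens one at a time; the class names, vocabulary layout, and start/stop token values are all hypothetical assumptions.

```python
# Hypothetical sketch of autoregressive token generation (PyTorch).
# This is NOT SolidGen's published architecture; it only illustrates the idea of
# predicting a shape's quantized entities one token at a time with a causal model.
import torch
import torch.nn as nn

class TinyCausalDecoder(nn.Module):
    def __init__(self, vocab_size=259, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask: position t may only attend to positions <= t.
        T = tokens.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.backbone(self.embed(tokens), mask=mask)
        return self.head(h)

@torch.no_grad()
def greedy_sample(model, start_token=256, stop_token=257, max_len=96):
    """Emit tokens one at a time until the (hypothetical) stop token appears."""
    seq = torch.tensor([[start_token]])
    for _ in range(max_len):
        logits = model(seq)[:, -1]            # distribution over the next token
        nxt = logits.argmax(dim=-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
        if nxt.item() == stop_token:
            break
    return seq.squeeze(0).tolist()

if __name__ == "__main__":
    model = TinyCausalDecoder().eval()
    print(greedy_sample(model))  # untrained model, so the tokens are arbitrary
```

In a B-rep-style pipeline the generated tokens would then be grouped into vertices, edges, and faces; the sampler above only demonstrates the token-by-token generation loop.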
Cited by 3 publications (3 citation statements)
References 16 publications
“…3D Generative Models. Significant progress has been made in the field of generative models for the creation of 3D shapes in various formats such as voxels [70,9,66,30], CAD [71,29,36,72], implicit representations [42,7,54], meshes [48,19], and point clouds [49,3,75,37,74,34].…”
Section: Related Work
confidence: 99%
“…Since the introduction of deep generative networks such as the Generative Adversarial Network (GAN) [Goodfellow et al. 2014] and the Variational Autoencoder (VAE) [Kingma and Welling 2014], developing deep generative models for 3D shapes has attracted immense research interest. Existing works learn to produce 3D shapes in different representations, including voxels [Wu et al. 2016], point clouds [Achlioptas et al. 2018; Cai et al. 2020; Li et al. 2021; Yang et al. 2019], meshes [Gao et al. 2021; Hertz et al. 2020; Liu et al. 2020; Nash et al. 2020], implicit functions [Chen and Zhang 2019; Mescheder et al. 2019; Park et al. 2019], multi-charts [Ben-Hamu et al. 2018; Groueix et al. 2018], structural primitives [Jones et al. 2020; Li et al. 2017; Mo et al. 2019; Wu et al. 2020], and parametric models [Jayaraman et al. 2022]. Nearly all these methods are trained on a large dataset of category-specific 3D shapes, e.g., ShapeNet [Chang et al. 2015].…”
Section: Related Work
confidence: 99%
“…It is such a laborious process that motivates the development of computer algorithms that create shapes: new, diverse, and high-quality 3D shapes created in a fully automatic fashion. Leveraging recent advances in deep learning, research in this direction has been vibrant [Achlioptas et al. 2018; Chen and Zhang 2019; Jayaraman et al. 2022; Nash et al. 2020; Wu et al. 2016]: the general theme here is to develop a generative model able to learn from a training dataset to generate 3D shapes.…”
Section: Introduction
confidence: 99%