SIGGRAPH Asia 2017 Technical Briefs
DOI: 10.1145/3145749.3145758
Exploring generative 3D shapes using autoencoder networks

Cited by 58 publications (48 citation statements)
References 3 publications
“…They are commonly used for 3D reconstruction [41,18] but suffer from large-scale memory inefficiency and require a space discretization which loses fine-grained details. Recent works compress 3D shapes using auto-encoders to efficiently handle geometric information [58,56,12]. They typically encode-decode shapes into different representations.…”
Section: Related Work (mentioning, confidence: 99%)
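As a rough illustration of the kind of auto-encoder compression the excerpt above describes, the sketch below trains nothing but defines a small fully connected autoencoder that maps a flattened voxel grid to a low-dimensional latent code and back. The framework (PyTorch), grid size, layer widths, and the `voxels` placeholder are assumptions made for illustration, not the method of the cited works.

```python
# Minimal sketch (assumptions: PyTorch, dense 32^3 occupancy-grid input) of an
# autoencoder that compresses a 3D shape into a low-dimensional code.
import torch
import torch.nn as nn

class ShapeAutoencoder(nn.Module):
    def __init__(self, grid_size=32, latent_dim=64):
        super().__init__()
        n = grid_size ** 3  # flattened occupancy grid
        self.encoder = nn.Sequential(
            nn.Linear(n, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n), nn.Sigmoid(),  # per-voxel occupancy probabilities
        )

    def forward(self, x):
        z = self.encoder(x)            # low-dimensional shape code
        return self.decoder(z), z

# Hypothetical usage: `voxels` stands in for a batch of flattened shapes.
model = ShapeAutoencoder()
voxels = torch.rand(8, 32 ** 3)
recon, code = model(voxels)
loss = nn.functional.binary_cross_entropy(recon, voxels)
```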
“…In particular, variational autoencoder (VAE) and generative adversarial network (GAN) are popular generative models used in design optimization, where high-dimensional design variables are encoded in low-dimensional design space [13,14]. In addition, these models are utilized in the design exploration and shape parameterization [8,9].…”
Section: Generative Models for Generative Design (mentioning, confidence: 99%)
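The excerpt above notes that VAEs encode high-dimensional design variables into a low-dimensional design space. The sketch below is a minimal variational autoencoder for that setting; the input dimension, layer widths, latent size, and the random `designs` tensor are illustrative assumptions rather than details from the cited papers.

```python
# Minimal sketch (assumption: PyTorch) of a VAE mapping high-dimensional
# design variables to a low-dimensional latent design space.
import torch
import torch.nn as nn

class DesignVAE(nn.Module):
    def __init__(self, input_dim=256, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

# Hypothetical usage with random stand-in "design variables".
vae = DesignVAE()
designs = torch.randn(16, 256)
recon, mu, logvar = vae(designs)
recon_loss = nn.functional.mse_loss(recon, designs)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kl
```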
“…Overfitting can be a concern when the training dataset size is not large. While the size of our dataset is comparable to other training datasets described in learning literature [50,2], we would like our approach to be applicable to smaller or evolving datasets of character animation data.…”
Section: Overfitting (mentioning, confidence: 99%)