2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00823
Composite Shape Modeling via Latent Space Factorization

Abstract: We present a novel neural network architecture, termed Decomposer-Composer, for semantic structure-aware 3D shape modeling. Our method utilizes an auto-encoder-based pipeline and produces a novel factorized shape latent space, where the semantic structure of the shape collection translates into a data-dependent sub-space factorization, and where shape composition and decomposition become simple linear operations on the embedding coordinates. We further propose to model shape assembly using an explicit learned …

Cited by 58 publications (39 citation statements) | References 41 publications
“…In general, 3D shapes can be generated in a part-conditioned or unconditioned manner. Part-conditioned means that we employ an additional dataset with part labels to train a part-wise variational autoencoder to encode each predefined (or learned) part into a latent distribution [Dubrovina et al. 2019; Mo et al. 2019, 2020; Wu et al. 2020], so that new shapes can be generated by sampling from the distributions of parts and composing them. On the other hand, unconditioned means that we directly synthesize 3D shapes from a random distribution, so the generation process has full freedom and the generated samples are not limited by the part-annotation data or any pre-defined structure relation.…”
Section: Related Work
confidence: 99%
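The part-conditioned recipe quoted above (sample each part's latent distribution, then compose the samples into a shape code) can be sketched as follows. This is a minimal illustration, not the cited papers' implementation: the part names, latent dimensions, and concatenation-based composition rule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-part latent Gaussians, as a part-wise VAE might learn.
# Means and standard deviations here are placeholders for illustration.
part_dists = {
    "seat": (np.zeros(8), np.ones(8)),
    "back": (np.zeros(8), np.ones(8)),
    "legs": (np.zeros(8), np.ones(8)),
}

def sample_part(mu, sigma):
    """Reparameterized sample: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + sigma * rng.standard_normal(mu.shape)

# Sample each part independently, then compose by concatenation --
# one simple composition rule among several used in the literature.
parts = [sample_part(mu, sigma) for mu, sigma in part_dists.values()]
shape_code = np.concatenate(parts)
print(shape_code.shape)  # (24,)
```

A decoder network would then map `shape_code` to geometry; swapping one part's sample while keeping the others fixed yields the part-level recombination the excerpt describes.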
“…[11,17,20] introduced frameworks that assigned individual probability distributions to parts of shapes following flattened hierarchies. Similarly, the latent codes learnt in [5] were factored according to individual components of shapes, and [30] introduced a Seq2Seq model for flattened object hierarchies.…”
Section: Structured Shape Generation
confidence: 99%
“…They showed that the part-based model encourages the generator to create new data unseen in the training set. Dubrovina et al. [4] handled the composition and decomposition of each part as a simple linear operation on the factorized embedding space. They used projection matrices to split full object encodings into part encodings.…”
Section: Part-based Shape Generative Models
confidence: 99%
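The projection-matrix factorization the excerpt attributes to [4] can be sketched with fixed block-diagonal projections; in the actual method the projections are learned, so the matrices, dimensions, and block structure below are illustrative assumptions only. The key properties are that the projections sum to the identity (so summing part codes recomposes the full code) and are mutually orthogonal.

```python
import numpy as np

D, K = 12, 3  # full latent dimension and number of parts (illustrative)

# Fixed block-diagonal projection matrices P_i standing in for learned ones:
# sum_i P_i = I and P_i @ P_j = 0 for i != j.
projections = []
for i in range(K):
    P = np.zeros((D, D))
    block = slice(i * D // K, (i + 1) * D // K)
    P[block, block] = np.eye(D // K)
    projections.append(P)

z_full = np.arange(D, dtype=float)              # a full-shape embedding
part_codes = [P @ z_full for P in projections]  # decomposition: one code per part
z_recomposed = sum(part_codes)                  # composition is a plain sum

assert np.allclose(z_recomposed, z_full)
```

Because composition is linear, swapping in a part code projected from a different shape's embedding yields a valid composite code, which is what makes shape mixing a "simple linear operation" in this factorized space.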
“…To compose each part, both [3] and [4] compute per-part affine transformations. Our model neither utilizes spatial transformer networks [13] nor computes affine transformations to combine the parts of the data.…”
Section: Part-based Shape Generative Models
confidence: 99%