2020
DOI: 10.1145/3381866

Deep Generative Modeling for Scene Synthesis via Hybrid Representations

Fig. 1 (caption): Comparisons between our randomly generated scenes (rows 1 and 3) and their nearest neighbors in the training data (rows 2 and 4). Our synthesized scenes present significant topological and geometrical variations from the training data.

Abstract: We present a deep generative scene modeling technique for indoor environments. Our goal is to train a generative model using a feed-forward neural network that maps a prior distribution (e.g., a normal distribution) to the distribution of primary objects in indoor scenes…
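The abstract describes training a feed-forward network that maps samples from a simple prior to scene object attributes. The following is a minimal sketch of that idea only, not the paper's actual hybrid-representation architecture; the latent size, object-slot count, and attribute layout are illustrative assumptions.

```python
# Minimal sketch (not the paper's architecture): a feed-forward generator that
# maps samples from a standard normal prior to a fixed-size matrix of object
# attributes, one row per object slot. All dimensions are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 128   # size of the normal prior (assumed)
MAX_OBJECTS = 20   # maximum number of object slots per scene (assumed)
ATTR_DIM = 10      # per-object attributes, e.g. class, position, size, orientation (assumed)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, MAX_OBJECTS * ATTR_DIM),
)

z = torch.randn(4, LATENT_DIM)                         # batch of prior samples
scenes = generator(z).view(4, MAX_OBJECTS, ATTR_DIM)   # batch of attribute matrices
print(scenes.shape)  # torch.Size([4, 20, 10])
```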

Citation report summary: Cited by 69 publications (39 citation statements). References 73 publications (88 reference statements). Citing publications were published between 2020 and 2023.
“…Thirdly, image-based feature representation can be applied to encode different aspects of enhancement styles so that convolutional neural networks can be well adopted. For example, [3,33] use top-view rendered images to encode the category or layout information for the furniture objects in a scene. We can extend their idea to encode information for scene styles in different channels of an image.…”
Section: Discussion and Future Work (citation type: mentioning)
confidence: 99%
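The citing passage above refers to encoding per-category furniture layout in the channels of a top-view rendered image so that a CNN can consume it. Below is a hedged sketch of that general idea; the grid size, category count, and bounding-box format are illustrative assumptions, not the cited papers' exact encoding.

```python
# Sketch: rasterise a scene's furniture into a multi-channel top-view image,
# one channel per object category. Sizes and box format are assumptions.
import numpy as np

NUM_CATEGORIES = 8   # assumed number of furniture categories
GRID = 64            # assumed top-view resolution (GRID x GRID cells)

def rasterize_scene(objects, room_size=6.0):
    """objects: list of (category_id, x_min, y_min, x_max, y_max) in metres."""
    image = np.zeros((NUM_CATEGORIES, GRID, GRID), dtype=np.float32)
    scale = GRID / room_size
    for cat, x0, y0, x1, y1 in objects:
        c0, r0 = int(x0 * scale), int(y0 * scale)
        c1 = min(GRID, int(np.ceil(x1 * scale)))
        r1 = min(GRID, int(np.ceil(y1 * scale)))
        image[cat, r0:r1, c0:c1] = 1.0   # mark the object's top-down footprint
    return image

# Example: a bed (category 0) and a nightstand (category 3)
top_view = rasterize_scene([(0, 1.0, 1.0, 3.0, 2.5), (3, 3.1, 1.0, 3.6, 1.5)])
print(top_view.shape)  # (8, 64, 64)
```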
“…Other recent methods have adapted deep neural networks for scene synthesis. [32] uses a Generative Adversarial Network to generate scenes in an attribute-matrix form (i.e. one column per scene object).…”
Section: Related Work (citation type: mentioning)
confidence: 99%
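The passage above mentions a GAN that generates scenes as an attribute matrix with one column (or row) per object. The following is a minimal adversarial sketch of that representation only; the shapes, networks, and losses are assumptions, not the architecture of the cited work [32].

```python
# Sketch of an attribute-matrix GAN: the generator emits a
# (MAX_OBJECTS x ATTR_DIM) matrix per scene and the discriminator scores it.
import torch
import torch.nn as nn

MAX_OBJECTS, ATTR_DIM, LATENT_DIM = 20, 10, 64   # assumed sizes

G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, MAX_OBJECTS * ATTR_DIM))
D = nn.Sequential(nn.Flatten(), nn.Linear(MAX_OBJECTS * ATTR_DIM, 256),
                  nn.LeakyReLU(0.2), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
real = torch.rand(8, MAX_OBJECTS, ATTR_DIM)   # placeholder "real" scene matrices
fake = G(torch.randn(8, LATENT_DIM)).view(8, MAX_OBJECTS, ATTR_DIM)

# One iteration's losses; a training loop would backpropagate and step optimisers.
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
g_loss = bce(D(fake), torch.ones(8, 1))       # generator tries to fool D
```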
“…One family of approaches to this problem attempts to place objects via optimization, typically a variant of Markov Chain Monte Carlo, with respect to learned priors, hand‐crafted constraints, or some combination of the two [YYT∗11, MSL∗11, FRS∗12, QZH∗18]. Another family of approaches trains deep generative models to output scenes, using either a latent variable model such as Variational Autoencoder [LPX∗19] or Generative Adversarial Network [ZYM∗20] or using an iterative object‐by‐object approach [WSCR18, RWL19, WLW∗19]. The iterative approaches are most similar to our setting.…”
Section: Background and Related Work (citation type: mentioning)
confidence: 99%
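The passage above contrasts latent-variable models with iterative, object-by-object synthesis. Below is a hedged sketch of the iterative control flow only: a model repeatedly proposes the next object conditioned on the partial scene until it signals completion. `propose_next_object` is a hypothetical stand-in for a learned conditional model, not any cited method's interface.

```python
# Sketch of iterative object-by-object scene synthesis. The proposal function
# here is a random stand-in; a real system would use a learned network
# conditioned on the partial scene.
import random

CATEGORIES = ["bed", "wardrobe", "nightstand", "desk", "chair", None]  # None = stop

def propose_next_object(partial_scene):
    """Hypothetical stand-in for a learned conditional model."""
    category = random.choice(CATEGORIES)
    if category is None or len(partial_scene) >= 10:
        return None
    # Positions are sampled uniformly here; a real model would condition on the scene.
    return {"category": category, "x": random.uniform(0, 6), "y": random.uniform(0, 6)}

def synthesize_scene():
    scene = []
    while True:
        obj = propose_next_object(scene)
        if obj is None:      # stop signal: the scene is complete
            break
        scene.append(obj)
    return scene

print(synthesize_scene())
```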