2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00634
Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models

Figure 1. Synthetic virtual scenes (bedrooms, living rooms, offices, bathrooms) generated by our method. Our model can generate a large variety of such scenes, as well as complete partial scenes, in under two seconds per scene. This performance is enabled by a pipeline of multiple deep convolutional generative models which analyze a top-down representation of the scene.

Abstract: We present a new, fast and flexible pipeline for indoor scene synthesis that is based on deep convolutional generative models. Our method ope…
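The abstract describes a pipeline of convolutional generative models that operate on a top-down image of the room. The snippet below is a minimal, hypothetical sketch of one such module (not the authors' code; the channel layout, network size, and class name TopDownPlacementCNN are assumptions) that maps a multi-channel top-down scene image to a spatial probability map over candidate object locations.

# Hypothetical sketch, not the authors' implementation: a convolutional module
# that reads a multi-channel top-down scene image and outputs a per-pixel
# probability map, e.g. for where the next object could plausibly be placed.
import torch
import torch.nn as nn

class TopDownPlacementCNN(nn.Module):
    def __init__(self, in_channels=8, hidden=64):
        super().__init__()
        # Channels of the top-down image might encode floor mask, existing
        # object categories, heights, etc. (assumed encoding).
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 1),  # one logit per pixel
        )

    def forward(self, scene_image):
        # scene_image: (batch, in_channels, H, W) top-down rendering of the room
        logits = self.net(scene_image)
        b, _, h, w = logits.shape
        # Normalize logits into a spatial distribution over locations
        return torch.softmax(logits.view(b, -1), dim=-1).view(b, 1, h, w)

# Usage: score placements on a 64x64 top-down view of a partial scene
probs = TopDownPlacementCNN()(torch.randn(1, 8, 64, 64))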

Citations: Cited by 126 publications (120 citation statements)
References: References 24 publications
“…A living-room-first strategy based on Refs. [95] and [90] was used to determine room connections and positions; it improves the plausibility of the resulting floor plans. An encoder-decoder network was then applied to predict wall positions.…”
Section: 3D Scene Composition (mentioning)
confidence: 99%
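The citing work mentions an encoder-decoder network that predicts wall positions from a coarse room layout. The following is an illustrative sketch under assumed choices (architecture, channel count, and the name WallEncoderDecoder are not from the cited survey): a small encoder-decoder mapping a top-down room-connection image to a per-pixel wall-probability map.

# Illustrative sketch only (assumed architecture): encoder-decoder that maps a
# coarse top-down room-layout image to a per-pixel wall-probability map.
import torch
import torch.nn as nn

class WallEncoderDecoder(nn.Module):
    def __init__(self, in_channels=3, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden * 2, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 1, 4, stride=2, padding=1),
        )

    def forward(self, layout_image):
        # layout_image: (batch, in_channels, H, W) coarse room-connection map
        return torch.sigmoid(self.decoder(self.encoder(layout_image)))

# Usage: predict wall probabilities for a 128x128 layout image
wall_probs = WallEncoderDecoder()(torch.randn(1, 3, 128, 128))  # (1, 1, 128, 128)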
“…More recently, with the availability of large datasets of 3D environments such as SUNCG [Song et al. 2017], learning-based approaches have become popular. A variety of approaches have been proposed using: human-centric probabilistic grammars [Qi et al. 2018], Generative Adversarial Networks trained on a matrix representation of present scene objects, recursive neural networks trained to sample 3D scene hierarchies, and convolutional neural networks (CNNs) trained on top-down image representations of rooms [Ritchie et al. 2019; Wang et al. 2018]. Our system uses the fast CNN modules of the latter image-based method to instantiate relationship graphs, modified significantly to work with a relationship graph as input.…”
Section: Background and Related Work (mentioning)
confidence: 99%
“…Using neural networks provides robustness to the noise present in relation graphs both in the training data and for the output graphs of our generative model. We adapt modules from recent work on scene synthesis [Ritchie et al. 2019], modifying them to take the graph as input and to attempt to adhere to the structure that it mandates. Section 6 describes these modules, and our overall scene instantiation search procedure, in more detail.…”
Section: Overview (mentioning)
confidence: 99%
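This citing work describes conditioning the convolutional scene-synthesis modules on a relationship graph. The sketch below shows one generic way such conditioning could look; the fusion scheme, graph embedding size, output head, and the name GraphConditionedModule are assumptions for illustration, not the cited paper's design.

# Hedged sketch of the general idea (names and shapes are assumptions): a module
# whose convolutional branch sees the top-down scene image while a separate
# embedding summarizes the relationship graph it should adhere to.
import torch
import torch.nn as nn

class GraphConditionedModule(nn.Module):
    def __init__(self, in_channels=8, graph_dim=32, hidden=64, num_categories=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool image features to a single vector
        )
        self.head = nn.Sequential(
            nn.Linear(hidden + graph_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_categories),  # e.g. logits over object categories
        )

    def forward(self, scene_image, graph_embedding):
        img_feat = self.conv(scene_image).flatten(1)           # (batch, hidden)
        joint = torch.cat([img_feat, graph_embedding], dim=1)  # fuse image + graph
        return self.head(joint)

# Usage: predict category logits given a top-down image and a graph embedding
logits = GraphConditionedModule()(torch.randn(1, 8, 64, 64), torch.randn(1, 32))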