2021
DOI: 10.1145/3450626.3459848

Neural scene graph rendering

Abstract: We present a neural scene graph---a modular and controllable representation of scenes with elements that are learned from data. We focus on the forward rendering problem, where the scene graph is provided by the user and references learned elements. The elements correspond to geometry and material definitions of scene objects and constitute the leaves of the graph; we store them as high-dimensional vectors. The position and appearance of scene objects can be adjusted in an artist-friendly manner via familiar transformations. […]
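The abstract describes a graph whose leaves are learned high-dimensional vectors standing in for geometry and material, while the graph topology and the transformations applied to objects remain under the user's control. Below is a minimal sketch of such a structure, assuming a Python/NumPy setting; the class names, vector sizes, and the `flatten` traversal are illustrative stand-ins, not the paper's actual implementation.

```python
# Hypothetical sketch of a neural scene graph: leaves store learned latent
# vectors, interior nodes store user-controlled transformations.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class LearnedLeaf:
    """Leaf element: latent codes standing in for geometry and material."""
    geometry_code: np.ndarray  # e.g. a learned 64-dim vector
    material_code: np.ndarray  # e.g. a learned 64-dim vector


@dataclass
class GraphNode:
    """Interior node: a transform applied to everything beneath it."""
    transform: np.ndarray                          # 4x4 homogeneous matrix
    children: list = field(default_factory=list)   # GraphNode or LearnedLeaf


def flatten(node, parent_xform=np.eye(4)):
    """Traverse the graph, composing transforms down to each leaf.

    Returns (world_transform, geometry_code, material_code) tuples that a
    learned renderer could consume.
    """
    if isinstance(node, LearnedLeaf):
        return [(parent_xform, node.geometry_code, node.material_code)]
    world = parent_xform @ node.transform
    out = []
    for child in node.children:
        out.extend(flatten(child, world))
    return out


# Usage: move an object by editing its node's transform, not its latent codes.
leaf = LearnedLeaf(np.zeros(64), np.zeros(64))
lift = np.eye(4)
lift[:3, 3] = [0.0, 1.0, 0.0]                      # translate up by one unit
root = GraphNode(np.eye(4), [GraphNode(lift, [leaf])])
primitives = flatten(root)                         # ready for a learned renderer
```

Editing a node's transform repositions an object without touching its learned codes, which is the kind of artist-friendly control the abstract refers to.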

Cited by 12 publications (3 citation statements)
References 32 publications
“…They may declare connectivity between coordinates explicitly such as in meshes, or implicitly such as in Voronoi cells. They too may also be organized into hierarchies, such as within a BVH or scene graph [GSRN21,OMT*21].…”
Section: Hybrid Representations
confidence: 99%
“…However, the decomposition is limited due to the inflexibly structured boundaries or strong assumptions about scenes; self-supervised object-centric learning is a difficult task. Other studies also explored reconstruction with more structured hybrid representations via pipelines specialized to a domain (e.g., traffic scenes) [58,22,36] or situation (e.g., each object's data is independently accessible) [25,24,92]. Note that this line of work defines and constrains domains or the types of segmentation during or before training, and thus limits the degrees of freedom for editable scenes and objects.…”
Section: Related Work
confidence: 99%
“…Neural rendering methods aim at synthesizing novel views of objects and scenes by learning scene representations from 2D observations in various forms, such as voxels [Lombardi et al 2019; Sitzmann et al 2019a], point clouds, meshes [Riegler and Koltun 2020, 2021], multi-plane images [Mildenhall et al 2019; Tucker and Snavely 2020; Wang et al 2021b] and implicit functions [Mildenhall et al 2020; Niemeyer et al 2020; Sitzmann et al 2019b]. NeRF [Mildenhall et al 2020] uses volume rendering to achieve photo-realistic results; follow-up works extend the model to multiple tasks, such as pose estimation [Yen-Chen et al 2021], dense surface reconstruction [Oechsle et al 2021; Wang et al 2021a; Yariv et al 2021] and scene editing [Granskog et al 2021; Yang et al 2021]. Meanwhile, other methods [Riegler and Koltun 2020, 2021] also show impressive free-viewpoint rendering capability in the wild, or scene rendering [DeVries et al 2021; Luo et al 2020] of indoor environments.…”
Section: Related Work
confidence: 99%
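The last excerpt notes that NeRF attains photo-realistic results through volume rendering. For context, here is a minimal sketch of the standard emission-absorption quadrature used for that compositing step, assuming NumPy; the sampling strategy and the network that predicts densities and colours are omitted, and the function name and signature are hypothetical.

```python
import numpy as np


def composite(densities, colors, deltas):
    """NeRF-style volume rendering quadrature along a single ray.

    densities: (N,)   non-negative sigma_i at each sample
    colors:    (N, 3) RGB c_i predicted at each sample
    deltas:    (N,)   distances between consecutive samples
    Returns the composited RGB colour of the ray.
    """
    alpha = 1.0 - np.exp(-densities * deltas)        # opacity of each segment
    # Transmittance T_i = prod_{j<i} (1 - alpha_j): light surviving up to i.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * trans                          # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)


# Example: three samples along one ray.
rgb = composite(np.array([0.5, 1.0, 2.0]),
                np.ones((3, 3)) * 0.8,
                np.full(3, 0.1))
```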