2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00254

DeepVoxels: Learning Persistent 3D Feature Embeddings

Abstract: In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis. To this end, we propose DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry. At its core, our approach is based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure. Our approach combines insights from 3D geome…
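To make the abstract's core idea concrete, here is a minimal PyTorch sketch of a persistent, learnable Cartesian feature grid that is sampled along camera rays and decoded to an image. The grid size, the mean-pooling projection, and the small rendering head are illustrative assumptions; DeepVoxels itself uses a learned, occlusion-aware projection and a more elaborate rendering network.

```python
# Minimal sketch of a persistent 3D feature voxel grid, assuming PyTorch.
# Sizes and the rendering head are illustrative, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersistentVoxelEmbedding(nn.Module):
    def __init__(self, feat_dim=16, grid_size=32):
        super().__init__()
        # Cartesian 3D grid of persistent, learnable feature vectors,
        # optimized jointly with the rendering network across all views.
        self.voxels = nn.Parameter(
            torch.randn(1, feat_dim, grid_size, grid_size, grid_size) * 0.01)
        # Simple 2D network that decodes projected features to RGB.
        self.render = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1))

    def forward(self, sample_coords):
        # sample_coords: (1, D, H, W, 3) ray-sample positions in the grid's
        # normalized [-1, 1] coordinates, derived from the camera pose.
        feats = F.grid_sample(self.voxels, sample_coords,
                              align_corners=True)   # (1, C, D, H, W)
        # Collapse the depth dimension; mean pooling stands in for the
        # paper's learned occlusion-aware projection.
        proj = feats.mean(dim=2)                    # (1, C, H, W)
        return self.render(proj)                    # (1, 3, H, W)

model = PersistentVoxelEmbedding()
coords = torch.rand(1, 8, 64, 64, 3) * 2 - 1        # dummy ray samples
rgb = model(coords)
print(rgb.shape)  # torch.Size([1, 3, 64, 64])
```

The property the sketch preserves is persistence: the same `voxels` parameter receives gradients from every training view, so it accumulates a view-independent representation of the scene rather than being re-predicted per image.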

Cited by 504 publications (395 citation statements), 2019–2023 · References 47 publications

“…Numerous recent works employed DR for learning-based 3D vision tasks, such as single-view image reconstruction [Pontes et al. 2017; Vogels et al. 2018; Yan et al. 2016], face reconstruction [Richardson et al. 2017], shape completion [Hu et al. 2019], and image synthesis [Sitzmann et al. 2018]. To describe a few, Pix2Scene [Rajeswar et al. 2018] uses a point-based DR to learn implicit 3D representations from images.…”
Section: Differentiable Rendering
confidence: 99%
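For readers unfamiliar with the "DR" abbreviation in the excerpt above, the sketch below illustrates differentiable rendering in its simplest form: an image-formation step written so that gradients of an image-space loss flow back to the scene parameters. Gaussian point splatting is an illustrative stand-in here, not the formulation used by Pix2Scene or any of the cited works.

```python
# Toy differentiable renderer: soft Gaussian splatting of 2D points.
import torch

def splat_points(points, H=32, W=32, sigma=2.0):
    """Render (N, 2) image-plane points into an (H, W) soft intensity map."""
    ys = torch.arange(H, dtype=torch.float32).view(H, 1, 1)
    xs = torch.arange(W, dtype=torch.float32).view(1, W, 1)
    dy = ys - points[:, 1]              # (H, 1, N): row distance to each point
    dx = xs - points[:, 0]              # (1, W, N): column distance to each point
    weights = torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2))  # (H, W, N)
    return weights.sum(dim=-1)          # each point becomes a soft Gaussian blob

# Because every step is differentiable, an image-space loss yields
# gradients on the point positions themselves.
target = splat_points(torch.tensor([[10.0, 9.0], [18.0, 14.0]])).detach()
points = torch.tensor([[8.0, 8.0], [20.0, 12.0]], requires_grad=True)
loss = ((splat_points(points) - target) ** 2).mean()
loss.backward()
print(points.grad)  # non-zero gradients on the point positions
```

A hard rasterizer would assign each point to one pixel and block gradients; the soft kernel is what makes the rendering step usable inside gradient-based learning, which is the common thread of the DR works cited above.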
“…Novel view synthesis is typically solved using image-based rendering techniques [Kang et al. 2006], with recent approaches allowing for high-quality view synthesis results [Chaurasia et al. 2011, 2013; Hedman et al. 2017; Hedman and Kopf 2018; Penner and Zhang 2017]. With the emergence of deep neural networks, learning-based techniques have become an increasingly popular tool for novel view synthesis [Flynn et al. 2016; Ji et al. 2017; Kalantari et al. 2016; Meshry et al. 2019; Mildenhall et al. 2019; Sitzmann et al. 2019; Thies et al. 2018; Zhou et al. 2018]. To enable high-quality synthesis results, existing methods typically require multiple input views [Kang et al. 2006; Penner and Zhang 2017].…”
Section: Related Work 2.1 Novel View Synthesis
confidence: 99%
“…DeepVoxels [Sitzmann et al. 2018] automatically learns a 3D feature representation for novel view synthesis but is limited to static objects and scenes, and the employed architecture does not lend itself to real-time inference. Martin-Brualla et al. [2018] use neural networks to fill holes and generally improve the quality of a textured geometric representation.…”
Section: Neural Rendering
confidence: 99%