2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00708

Neural Illumination: Lighting Prediction for Indoor Environments

Abstract: This paper addresses the task of estimating the light arriving from all directions to a 3D point observed at a selected pixel in an RGB image. This task is challenging because it requires predicting a mapping from a partial scene observation by a camera to a complete illumination map for a selected position, which depends on the 3D location of the selection, the distribution of unobserved light sources, the occlusions caused by scene geometry, etc. Previous methods attempt to learn this complex mapping directly…
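To make the task statement concrete, here is a minimal sketch of the input/output interface: an RGB image plus a mask marking the selected pixel goes in, and a low-resolution panoramic illumination map for that location comes out. The architecture, shapes, and names below are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch only: single RGB observation + selected pixel ->
# panoramic illumination map at that 3D location. All module choices,
# shapes, and the panorama resolution are assumptions.
import torch
import torch.nn as nn

class IlluminationPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder over the partial scene observation plus a selection mask.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Decoder to a low-resolution equirectangular illumination map.
        self.decoder = nn.Linear(64, 3 * 16 * 32)

    def forward(self, rgb, pixel_mask):
        # rgb:        (B, 3, H, W) observed image
        # pixel_mask: (B, 1, H, W) one-hot map marking the selected pixel
        x = torch.cat([rgb, pixel_mask], dim=1)
        feat = self.encoder(x).flatten(1)
        env = self.decoder(feat).view(-1, 3, 16, 32)  # (B, 3, 16, 32) panorama
        return env
```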

Cited by 115 publications (149 citation statements). References 28 publications.

“…Legendre et al extend this work to mobile applications and obtain better results by using a collection of videos as training data [14]. Song et al [23] use the Matterport3D [5] dataset and a novel warping procedure in order to support multiple insertion points. We improve on their work by training an end-to-end neural network to predict discrete parametric 3D lights with 3D position, area, color and intensity.…”
Section: Related Work (mentioning, confidence: 99%)
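A hedged sketch of what such a parametric light prediction head could look like; the number of lights, feature dimension, and activation choices are assumptions rather than details of the cited work:

```python
# Sketch of a head that regresses discrete parametric 3D lights
# (position, area, color, intensity). All sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricLightHead(nn.Module):
    def __init__(self, feat_dim=512, num_lights=3):
        super().__init__()
        self.num_lights = num_lights
        # 3 (position) + 1 (area) + 3 (color) + 1 (intensity) = 8 values per light
        self.fc = nn.Linear(feat_dim, num_lights * 8)

    def forward(self, feat):
        # feat: (B, feat_dim) image features from any backbone
        p = self.fc(feat).view(-1, self.num_lights, 8)
        return {
            "position":  p[..., 0:3],                 # 3D position in scene coordinates
            "area":      F.softplus(p[..., 3]),       # positive light area
            "color":     torch.sigmoid(p[..., 4:7]),  # normalized RGB color
            "intensity": F.softplus(p[..., 7]),       # positive scalar intensity
        }
```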
“…The reconstruction can be performed by photographing light probes [11,12], labeling lights interactively [11], or automated with an optimization process [13-15]. Deep neural networks can also learn relevant information from photographs, including lighting [16-20], geometry and albedo [21], or even SVBRDF [22-25]. Regardless of how the real scene is reconstructed, though, final compositing still comes down to differential rendering [12,13,18,26-28].…”
Section: Related Work (mentioning, confidence: 99%)
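The differential rendering referenced above composites a virtual object by adding to the photograph only the change the object causes in a rendering of the local scene. A minimal sketch of the standard formulation, with variable names of our choosing:

```python
# Minimal numpy sketch of differential rendering for compositing a virtual
# object into a photograph. Variable names are ours.
import numpy as np

def differential_composite(background, render_with, render_without, obj_mask):
    """background:     (H, W, 3) photograph
    render_with:       (H, W, 3) local scene rendered WITH the virtual object
    render_without:    (H, W, 3) local scene rendered WITHOUT the object
    obj_mask:          (H, W, 1) 1 where the virtual object covers the pixel
    """
    # Where the object is visible, take the rendering directly; elsewhere,
    # add only the change the object causes (shadows, interreflections).
    delta = render_with - render_without
    return obj_mask * render_with + (1.0 - obj_mask) * (background + delta)
```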
“…Spatially-variant lighting information can traditionally be extracted using physical probes [6,21], and more recently estimated with deep neural networks [7,8,28,33]. For example, Debevec et al. demonstrated that spatially-variant lighting can be effectively estimated by using reflective sphere light probes to extrapolate camera views.…”
Section: Introduction (mentioning, confidence: 99%)
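The mirror-sphere light probe technique credited to Debevec et al. can be sketched in a few lines: each probe pixel yields one incoming-light direction by reflecting the viewing ray off the sphere. The orthographic-camera and coordinate conventions below are assumptions:

```python
# Sketch: map a pixel on a photographed mirror ball to the world direction
# of the light it reflects. Assumes an orthographic camera looking down -z.
import numpy as np

def mirror_ball_direction(u, v):
    """u, v in [-1, 1] on the ball's image disc; returns the reflected direction."""
    r2 = u * u + v * v
    if r2 > 1.0:
        raise ValueError("pixel lies outside the mirror ball")
    nz = np.sqrt(1.0 - r2)             # sphere normal, +z toward the camera
    view = np.array([0.0, 0.0, -1.0])  # viewing ray into the scene
    n = np.array([u, v, nz])
    return view - 2.0 * np.dot(view, n) * n  # reflected ray = incoming-light direction
```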
“…As we later observe, XiheNet models trained on LDR-based datasets lead to better visual effects than those trained on HDR-based datasets; for the remainder of the paper, we report results using XiheNet trained on LDR-based datasets. Xihe outputs SH coefficients as an omnidirectional representation of environment lighting at a single world position for rendering. If directly using image-based lighting estimation models [8,28], one needs to post-process the estimated SH coefficients to orient them correctly, since the 3D world orientation cannot be represented in the image input. Our XiheNet guarantees orientation constancy [33] by explicitly considering the world-space point cloud and estimating SH coefficients at the same orientation.…”
(mentioning, confidence: 99%)
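For reference, evaluating the order-2 (9-coefficient) real spherical-harmonics lighting representation discussed in this statement, in a given world-space direction, looks like the following; the helper name is ours, and the constants are the standard real SH basis:

```python
# Sketch: evaluate 9-coefficient (order-2) real spherical harmonics lighting
# in a unit world-space direction. Constants are the standard real SH basis.
import numpy as np

def eval_sh9(coeffs, d):
    """coeffs: (9, 3) RGB SH coefficients; d: unit direction (x, y, z)."""
    x, y, z = d
    basis = np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])
    return basis @ coeffs  # (3,) RGB radiance arriving from direction d
```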