2022
DOI: 10.1145/3522735

Active Exploration for Neural Global Illumination of Variable Scenes

Abstract: Neural rendering algorithms introduce a fundamentally new approach for photorealistic rendering, typically by learning a neural representation of illumination from large numbers of ground truth images. When training for a given variable scene, i.e., changing objects, materials, lights and viewpoint, the space \(\mathcal {D} \) of possible training data instances quickly becomes unmanageable as the dimensions of variable parameters incre…
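The abstract's core idea, choosing training instances from a large space \(\mathcal {D} \) of variable scene configurations rather than sampling it exhaustively, can be illustrated as loss-prioritized sampling: spend more training effort on configurations the network currently handles worst, while still occasionally exploring at random. This is a minimal sketch of that general principle, not the paper's actual algorithm; all names below are invented.

```python
import random

def active_sample(configs, losses, explore_prob=0.3):
    """Pick the next training configuration.

    Usually returns the configuration with the highest recorded loss
    (exploitation of hard examples); with probability `explore_prob`,
    or when no losses have been recorded yet, returns a uniformly
    random configuration (exploration of the space D).
    """
    if random.random() < explore_prob or not losses:
        return random.choice(configs)
    return max(losses, key=losses.get)

# Toy space D: one variable light parameter with three settings.
configs = [("light", 0.0), ("light", 0.5), ("light", 1.0)]
# Hypothetical per-configuration losses from the last training pass.
losses = {("light", 0.0): 0.1, ("light", 0.5): 0.9, ("light", 1.0): 0.3}

# With exploration disabled, the hardest configuration is selected.
print(active_sample(configs, losses, explore_prob=0.0))  # ('light', 0.5)
```

In a real system the loss table would be refreshed as the network trains, so the sampler continually shifts toward whichever regions of the parameter space are currently under-learned.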



Cited by 19 publications (11 citation statements)
References 53 publications
“…Our method is unique in the sense that it represents the dynamic light field in an explicit spatio‐temporal cache. We compare it to the two most closely related neural rendering methods, Active Exploration [DPD22] and a NeRF‐style rendering method TiNeuVox [FYW*22]. Both methods are designed with dynamic scenes in mind, making them ideal for comparison.…”
Section: Results
Mentioning confidence: 99%
“…Adaptivity: The sampling pattern is not on‐line [VKŠ*14] or adaptive [DPD22], but is the same for each BRDF. A pattern that adapts to some other condition, or perhaps to the outcome of previous samples, would be a relevant avenue of future work.…”
Section: Discussion
Mentioning confidence: 99%
“…Bias: Moreover, obtaining the probability of a sample is important in tasks where we want to retain unbiased estimates, such as Monte Carlo rendering. Note that while we sample unevenly, we do not attempt to divide by the probability density to produce unbiased estimates of gradients, as it is not clear that a biased gradient estimate cannot ultimately be better than an unbiased one [DPD22]. What matters more is that the outer meta-optimizer sees the effect of those gradients and can factor it into the optimization by changing the initialization or step sizes.…”
Section: Discussion
Mentioning confidence: 99%
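The point in the citation above about dividing by the probability density can be made concrete with a tiny Monte Carlo example. This sketch is purely illustrative (the integrand and sampling distribution are invented, not taken from either paper): weighting each non-uniform sample by 1/pdf yields an unbiased estimate of the integral, while omitting the weight converges to a different quantity.

```python
import random

def f(x):
    """Integrand: the integral of x^3 over [0, 1] is exactly 0.25."""
    return x ** 3

def estimate(n, use_weights):
    """Monte Carlo estimate of the integral of f over [0, 1],
    drawing samples from the non-uniform density p(x) = 2x."""
    total = 0.0
    for _ in range(n):
        # Inverse-CDF sampling of p(x) = 2x on [0, 1]: x = sqrt(u).
        u = random.random()
        x = u ** 0.5
        pdf = 2.0 * x
        # Dividing by the pdf keeps the estimator unbiased;
        # skipping it instead estimates E_p[f(X)] = 0.4.
        total += f(x) / pdf if use_weights else f(x)
    return total / n

random.seed(0)
print(estimate(200_000, use_weights=True))   # close to 0.25 (unbiased)
print(estimate(200_000, use_weights=False))  # close to 0.40 (biased)
```

The quoted discussion argues that in gradient-based meta-optimization this bias may be acceptable, since the outer optimizer observes the biased gradients' effect and can compensate; in plain Monte Carlo rendering, by contrast, the 1/pdf weighting is what keeps the image estimate correct.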
“…Additionally, experiments showed that even if this tensor were subsampled at multiple views, the resulting convergence of the network was inadequate for obtaining good results on novel‐view specularities. In the future, a more adaptive active exploration approach may help increase the training time spent on hard‐to‐learn examples and prevent overfitting on the diffuse parts of the scene [DPD22].…”
Section: Implementation and Algorithm
Mentioning confidence: 99%