2023
DOI: 10.1145/3570327
Deep Appearance Prefiltering

Abstract: Physically based rendering of complex scenes can be prohibitively costly, with a potentially unbounded and uneven distribution of complexity across the rendered image. The goal of an ideal level of detail (LoD) method is to make rendering costs independent of the 3D scene complexity, while preserving the appearance of the scene. However, current prefiltering LoD methods are limited in the appearances they can support due to their reliance on approximate models and other heuristics. We pr…

Cited by 5 publications (1 citation statement)
References 106 publications
“…In our experiments, we found that such a volumetric representation results in fewer view-dependent artifacts than a scene vertex representation (see Table 6) or a learned neural texture (single-resolution hash grid), and is easier to implement and compress, since parameterization of geometry remains a difficult problem. Note that the rendering costs of volumetric methods are independent of the level of detail of the scene; this has been exploited in previous works involving neural scene-to-volume computation [BSK23]. Wm[k] is a two-dimensional grid that stores a feature vector for each wavelet.…”

Section: Mathematical Framework
confidence: 99%
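The quoted statement describes a data structure Wm[k]: one 2D grid per wavelet level, with each grid cell holding a learned feature vector. As a rough illustration only (the level count, grid resolution, and feature dimension below are assumptions, not values from the cited work), such per-level feature grids might be laid out like this:

```python
import numpy as np

# Hypothetical sketch of per-level wavelet feature grids W_m[k]:
# each level m has a 2D grid whose cell k = (i, j) stores a feature vector.
num_levels = 4   # assumed number of wavelet levels
grid_res = 16    # assumed resolution of the coarsest grid
feat_dim = 8     # assumed feature-vector length

# One 2D grid per level, with resolution doubling at each finer level.
W = [np.zeros((grid_res * 2**m, grid_res * 2**m, feat_dim), dtype=np.float32)
     for m in range(num_levels)]

def lookup(m, k):
    """Fetch the feature vector stored at integer grid index k = (i, j) of level m."""
    i, j = k
    return W[m][i, j]

feat = lookup(2, (5, 7))
print(feat.shape)  # (8,)
```

Storing features in regular grids like this is what makes the lookup cost independent of scene complexity: retrieval is a constant-time index into the level's array, regardless of how detailed the underlying geometry is.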