In this paper, we introduce a novel technique for pre-filtering multi-layer shadow maps. The occluders in the scene are stored as variable-length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre-filtering techniques, our method better captures the distribution of depth values, resulting in much higher shadow quality for overlapping occluders and occluders at different depths. The pre-filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real time, even for complex scenes and difficult setups. Our results demonstrate that our pre-filtered maps are general and scale particularly well.
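To make the progressive merging concrete, the sketch below shows, under assumed names and a simple greedy pruning heuristic (not taken from the paper), how the fragment lists of two neighbouring texels could be combined into a coarser, depth-sorted list with a fixed fragment budget:

```cpp
// Hypothetical CPU-side sketch: merging the fragment lists of two
// neighbouring texels into the list of the next-coarser texel.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Fragment {
    float depth;     // occluder depth in light space
    float coverage;  // fraction of the filter footprint this occluder covers
};

std::vector<Fragment> mergeLists(const std::vector<Fragment>& a,
                                 const std::vector<Fragment>& b,
                                 std::size_t maxFragments)
{
    std::vector<Fragment> merged;
    merged.reserve(a.size() + b.size());
    // Each child texel contributes half of the parent footprint.
    for (const Fragment& f : a) merged.push_back({f.depth, 0.5f * f.coverage});
    for (const Fragment& f : b) merged.push_back({f.depth, 0.5f * f.coverage});

    // Keep the list depth-sorted so shadow queries can terminate early.
    std::sort(merged.begin(), merged.end(),
              [](const Fragment& x, const Fragment& y) { return x.depth < y.depth; });

    // Enforce a fixed budget by greedily collapsing the two fragments that
    // are closest in depth (coverage-weighted average of their depths).
    while (merged.size() > std::max<std::size_t>(maxFragments, 1)) {
        std::size_t best = 0;
        for (std::size_t i = 1; i + 1 < merged.size(); ++i)
            if (merged[i + 1].depth - merged[i].depth <
                merged[best + 1].depth - merged[best].depth)
                best = i;
        Fragment& lo = merged[best];
        const Fragment& hi = merged[best + 1];
        const float w = lo.coverage + hi.coverage;
        lo.depth = (lo.depth * lo.coverage + hi.depth * hi.coverage) / std::max(w, 1e-6f);
        lo.coverage = w;
        merged.erase(merged.begin() + best + 1);
    }
    return merged;
}
```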
Figure 1: Example subdivision surface scenes rendered with diffuse path tracing (up to 8 bounces, 7–12 secondary rays per primary ray). Right: the Courtyard scene (66K patches after feature-adaptive subdivision) is adaptively tessellated into 1.4M triangles from scratch per frame and ray traced at over 90M rays per second (including shading) on a high-end Intel® Xeon® processor system using our efficient lazy-build caching scheme. Left: four Barbarians embedded in the Sponza Atrium scene (426K patches) and adaptively tessellated into 11M triangles are ray traced at 40M rays per second. A 60MB lazy-build cache allows this scene to be rendered at over 91% of the performance of an unbounded memory cache. Compared to ray tracing a pre-tessellated version, memory consumption is reduced by 6–7×.
In this paper we present a scattering-based method to compute high-quality depth of field in real time. Relying on multiple layers of scene data, our method naturally supports settings with partial occlusion, an important effect that is often disregarded by real-time approaches. Using well-founded layer-reduction techniques and an efficient mapping to the GPU, our approach outperforms established approaches with a similarly high-quality feature set. Our algorithm first collects a multi-layer image, which is reduced to keep only hidden fragments close to discontinuities. The remaining fragments are further reduced by merging and then splatted to screen-space tiles. The per-tile information is sorted and accumulated in order, yielding an approach that supports partial occlusion as well as properly ordered blending of the out-of-focus fragments.
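As a rough illustration of the final per-tile step, the following sketch sorts the splatted fragments of one pixel by depth and composites them front to back; the data layout and names are assumptions, not the authors' implementation:

```cpp
// Illustrative per-pixel resolve for one screen-space tile: splatted
// fragments are depth-sorted and blended in order, front to back.
#include <algorithm>
#include <vector>

struct SplatFragment {
    float depth;   // view-space depth of the blurred source fragment
    float alpha;   // coverage of this pixel by the fragment's blur disc
    float rgb[3];  // colour contribution
};

// 'frags' holds every splatted fragment whose circle of confusion
// overlaps the pixel being resolved.
void resolvePixel(std::vector<SplatFragment>& frags, float out[3])
{
    std::sort(frags.begin(), frags.end(),
              [](const SplatFragment& a, const SplatFragment& b) {
                  return a.depth < b.depth;    // nearest first
              });

    float transmittance = 1.0f;                // light still passing through
    out[0] = out[1] = out[2] = 0.0f;
    for (const SplatFragment& f : frags) {
        for (int c = 0; c < 3; ++c)
            out[c] += transmittance * f.alpha * f.rgb[c];
        transmittance *= (1.0f - f.alpha);
        if (transmittance < 1e-3f) break;      // early out once fully covered
    }
}
```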
Figure 1: Our novel approach to rendering depth of field in real time provides pleasant and plausible results (left), even for complicated cases where out-of-focus geometry in the near field would occlude important scene geometry (right).

We present a novel technique for rendering depth of field that addresses difficult overlap cases, such as close but out-of-focus geometry in the near field. Such scene configurations are not handled well by state-of-the-art post-processing approaches because essential information is missing due to occlusion. Our proposed algorithm renders the scene from a single camera position and computes a layered image in a single pass by constructing per-pixel lists. These lists can be filtered progressively to generate differently blurred representations of the scene. We show how this structure can be exploited to generate depth of field in real time, even in complicated scene configurations.
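Any such layered depth-of-field pipeline needs a per-fragment blur size; a common choice, assumed here since the abstract does not state the exact formula, is the thin-lens circle of confusion:

```cpp
// Thin-lens circle-of-confusion helper (assumed building block, not the
// paper's stated formula). Returns the blur-disc diameter on the sensor
// in metres; depth and focusDist must be positive and focusDist must
// exceed the focal length.
#include <cmath>

float circleOfConfusion(float depth,        // distance to the fragment (m)
                        float focusDist,    // distance to the focal plane (m)
                        float focalLength,  // lens focal length (m)
                        float aperture)     // entrance-pupil diameter (m)
{
    // Fragments off the focal plane project to a disc whose diameter grows
    // with their distance from that plane.
    return aperture * std::fabs(focalLength * (focusDist - depth)) /
           (depth * (focusDist - focalLength));
}
```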
Rendering in real time for highly immersive virtual reality headsets is challenging due to strict framerate constraints and a low tolerance for artefacts. Eye-tracking-based foveated rendering presents an opportunity to strongly increase performance without loss of perceived visual quality. To this end, we propose a novel foveated rendering method for virtual reality headsets with integrated eye-tracking hardware. Our method recycles pixels in the periphery by spatio-temporally reprojecting them from previous frames. Artefacts and disocclusions caused by this reprojection are detected and re-evaluated according to a confidence value determined by a newly introduced, formalized perception-based metric, referred to as the confidence function. The foveal region, as well as areas with low confidence values, is redrawn efficiently, as the confidence value allows for fine-grained regulation of hierarchical geometry and pixel culling. Hence, the average primitive-processing and shading costs are lowered dramatically. Evaluated against both regular rendering and established foveated rendering methods, our approach shows increased performance in both cases. Furthermore, our method is not restricted to static scenes and provides an acceleration structure for post-processing passes.
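The abstract does not define the confidence function itself, so the following is only an illustrative sketch of how a perception-based confidence value combining gaze eccentricity and reprojection validity might look; all thresholds and weights are assumptions:

```cpp
// Hypothetical confidence value in [0,1]: high values mean the reprojected
// pixel can be reused, low values mean it should be redrawn.
#include <algorithm>
#include <cmath>

// eccentricityDeg : angular distance of the pixel from the tracked gaze (deg)
// depthError      : |reprojected depth - current depth| after reprojection
// disoccluded     : true if the reprojection found no valid source pixel
float confidence(float eccentricityDeg, float depthError, bool disoccluded)
{
    if (disoccluded)
        return 0.0f;  // nothing to reuse; must be re-rendered

    // Assume full detail is required inside ~5 degrees of the gaze point and
    // that reuse becomes increasingly acceptable further out (acuity falloff).
    float fovealFalloff = std::clamp((eccentricityDeg - 5.0f) / 25.0f, 0.0f, 1.0f);

    // Penalise pixels whose reprojected depth disagrees with the new frame.
    float depthPenalty = std::exp(-50.0f * depthError);

    return fovealFalloff * depthPenalty;
}
```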