Left: Wireframe representation of a given model. Middle: We voxelize the space around the model. One voxel on the lowest octree level is selected based on the light position, and all potentially-silhouette edges (which still need to be tested) and silhouette edges (guaranteed to be silhouettes) can be collected by ascending the octree hierarchy. Right: Red-coloured edges are those that remain part of the silhouette after testing the set of potentially-silhouette edges (all red and black ones). Only a small subset of model edges needs to be tested, which considerably reduces the computational complexity.
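The lookup described in the caption can be sketched as follows, assuming a simple octree in which every node stores two precomputed edge sets for its voxel; the names (OctreeNode, findLeaf, collectEdges) are illustrative assumptions, not the paper's API:

```cpp
// Minimal sketch of the octree-based silhouette-edge lookup: descend to the
// deepest voxel containing the light, then ascend, merging per-level sets.
#include <vector>

struct Vec3 { float x, y, z; };

struct OctreeNode {
    Vec3 min, max;                       // axis-aligned bounds of the voxel
    OctreeNode* children[8] = {nullptr}; // all null in leaves
    OctreeNode* parent = nullptr;
    std::vector<int> silhouetteEdges;    // silhouette for ANY light in this voxel
    std::vector<int> potentialEdges;     // may be silhouette; must be tested
};

static bool contains(const OctreeNode& n, const Vec3& p) {
    return p.x >= n.min.x && p.x <= n.max.x &&
           p.y >= n.min.y && p.y <= n.max.y &&
           p.z >= n.min.z && p.z <= n.max.z;
}

// Descend to the lowest-level voxel containing the light position.
OctreeNode* findLeaf(OctreeNode* node, const Vec3& light) {
    for (;;) {
        OctreeNode* next = nullptr;
        for (OctreeNode* c : node->children)
            if (c && contains(*c, light)) { next = c; break; }
        if (!next) return node;          // reached the deepest voxel
        node = next;
    }
}

// Ascend from the leaf, merging the per-level edge sets. Only the
// "potential" set still needs a per-edge orientation test.
void collectEdges(OctreeNode* leaf,
                  std::vector<int>& guaranteed,
                  std::vector<int>& needTest) {
    for (OctreeNode* n = leaf; n; n = n->parent) {
        guaranteed.insert(guaranteed.end(),
                          n->silhouetteEdges.begin(), n->silhouetteEdges.end());
        needTest.insert(needTest.end(),
                        n->potentialEdges.begin(), n->potentialEdges.end());
    }
}
```

Collecting edges this way visits only as many nodes as the octree is deep, which is why only a small subset of all model edges ends up in the set that still needs per-edge testing.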
Figure 1: Leftmost: Streets of a city illuminated with 628 light sources using the attenuation shown in Eq. (1). Second left: Illumination using the attenuation shown in Eq. (2); the mean squared error is MSE = 0.033 for t_att = 0.0297. Second right: The same, but with ambient lighting; MSE = 0.011. Rightmost: Illumination using the attenuation shown in Eq. (4) together with ambient lighting; MSE = 0.0195.

Abstract: This paper presents and investigates methods for fast and accurate illumination of scenes containing many light sources that have limited spatial influence, e.g. point light sources. To speed up the computation, current graphics applications assume that, because of this limited spatial influence, the range of each light source can be bounded by a sphere, and illumination is computed only for surfaces lying within that sphere. We therefore explore the differences in illumination between scenes lit by such spatially limited light sources and the physically more correct computation in which the light radius is infinite. We show that the difference can be small if appropriate ambient lighting is added. The contribution of this paper is a method for fast estimation of ambient lighting in scenes illuminated by numerous light sources. We also propose a method for eliminating color discontinuities at the edges of the bounding spheres. Our solution is tested on two different scenes: a procedurally generated city and the Sibenik cathedral. Our approach allows correct lighting computation in scenes with numerous light sources without any modification of the scene graph, other data structures, or rendering procedures, so it can be applied in various systems without structural changes.
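The paper's attenuation formulas Eqs. (1)-(4) are not reproduced on this page, so the following sketch substitutes a common inverse-square falloff and a generic smooth window to illustrate the two ideas in the abstract: bounding a light's influence by a sphere without a color discontinuity at its edge, and compensating the cut-off energy with an ambient term. All formulas below are stand-ins, not the paper's equations:

```cpp
// Illustrative stand-ins for the paper's attenuation models (Eqs. (1)-(4)
// are not shown on this page).

// Physically based falloff with infinite radius (stand-in for the
// unbounded attenuation).
float attenuationFull(float d) {
    return 1.0f / (d * d);
}

// Range-limited falloff: zero outside the bounding-sphere radius r. The
// squared window w reaches 0 smoothly at d == r, which removes the color
// discontinuity at the sphere's edge.
float attenuationBounded(float d, float r) {
    if (d >= r) return 0.0f;
    float w = 1.0f - (d / r) * (d / r);
    return (w * w) / (d * d);
}

// Hypothetical ambient estimator: compensate the energy cut off beyond the
// bounding spheres with a constant term derived from the light intensities.
// This normalization is a guess for illustration, not the paper's estimator.
float estimateAmbient(const float* intensities, int n, float r) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) sum += intensities[i];
    return sum / (r * r * static_cast<float>(n));
}
```

The key property is that attenuationBounded agrees with the unbounded falloff near the light but fades continuously to zero at the sphere boundary, so no visible seam appears where the bounding sphere ends.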
This paper presents an in-depth comparison of state-of-the-art precise shadowing techniques for an omnidirectional point light. We chose several types of modern shadowing algorithms, ranging from stencil shadow volumes, through methods based on traversal of acceleration structures, to hardware-accelerated ray-traced shadows. Some methods were further improved in robustness and performance; we also provide the first multi-platform implementations of some of the tested algorithms. All the methods are evaluated on several test scenes at different resolutions and on two hardware platforms, with and without dedicated hardware units for ray tracing. We summarize our findings in terms of speed and memory consumption. Ray tracing is the fastest method, one of the easiest to implement, and has a small memory footprint. The Omnidirectional Frustum-Traced Shadows method has a predictable memory footprint and is the second fastest algorithm tested. Our stencil shadow volumes are faster than some newer algorithms. Per-Triangle Shadow Volumes and Clustered Per-Triangle Shadow Volumes are difficult to implement and require the most memory; the latter method scales well with scene complexity and resolution. Deep Partitioned Shadow Volumes does not excel in any of the measured parameters and is suitable for smaller scenes. The source code of the testing framework has been made publicly available.
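As an illustration of why the ray-traced approach is among the easiest to implement, here is a minimal CPU sketch of the underlying shadow query: a point is lit only if the shadow ray toward the light is unoccluded. Real implementations traverse an acceleration structure or use dedicated ray-tracing hardware instead of the brute-force loop used here:

```cpp
// Shadow query via Moller-Trumbore ray/triangle intersection, restricted
// to hits strictly between the shaded point and the light.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if the ray o + t*d hits triangle (v0,v1,v2) for some t in (eps, tMax).
static bool hits(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2, float tMax) {
    const float eps = 1e-4f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(o, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    return t > eps && t < tMax;               // occluder between point and light
}

// True if 'point' is shadowed with respect to the omnidirectional light.
// 'tris' holds three consecutive vertices per triangle.
bool inShadow(Vec3 point, Vec3 light, const std::vector<Vec3>& tris) {
    Vec3 d = sub(light, point);               // light sits at t == 1
    for (size_t i = 0; i + 2 < tris.size(); i += 3)
        if (hits(point, d, tris[i], tris[i+1], tris[i+2], 1.0f)) return true;
    return false;
}
```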
Light field rendering is an image-based rendering method that uses only images of the scene, rather than 3D models, as input for rendering new views. A light field approximation represented as a set of images suffers from so-called refocusing artifacts caused by the differing depths of pixels in the scene. Without depth information, proper focusing of a light field scene is limited to a single focusing distance. This work addresses correct focusing and proposes a real-time solution based on statistical analysis of the pixel values contributing to the final image. Unlike existing techniques, this method needs no precomputed or acquired depth information. Memory requirements and streaming bandwidth are reduced, and real-time rendering with visually satisfactory results is possible even for high-resolution light field data. An experimental evaluation of the proposed method, implemented on a GPU, is presented in this paper.
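A minimal sketch of per-pixel focusing by statistical analysis follows, under the assumption that the method picks, among a set of candidate focus distances, the one at which the samples gathered from the light field cameras agree the most (lowest variance); the gatherSamples callback and the candidate list are illustrative assumptions, not the paper's interface:

```cpp
// Depth-free refocusing sketch: in-focus samples from different cameras
// see the same surface point and therefore have low variance.
#include <functional>
#include <limits>
#include <vector>

// Mean and variance of the camera samples contributing to one output pixel.
static void meanVariance(const std::vector<float>& s, float& mean, float& var) {
    mean = 0.0f;
    for (float v : s) mean += v;
    mean /= static_cast<float>(s.size());
    var = 0.0f;
    for (float v : s) var += (v - mean) * (v - mean);
    var /= static_cast<float>(s.size());
}

// Returns the refocused pixel value: the mean at the candidate focus
// distance whose contributing samples have the smallest variance.
// gatherSamples(dist) reprojects the camera images at focus distance 'dist'
// and returns the samples landing on this pixel (hypothetical callback).
float focusPixel(const std::vector<float>& candidates,
                 const std::function<std::vector<float>(float)>& gatherSamples) {
    float best = 0.0f;
    float bestVar = std::numeric_limits<float>::max();
    for (float dist : candidates) {
        std::vector<float> samples = gatherSamples(dist);
        if (samples.empty()) continue;
        float mean, var;
        meanVariance(samples, mean, var);
        if (var < bestVar) { bestVar = var; best = mean; }
    }
    return best;
}
```

Because the selection needs only the running mean and variance of the contributing samples, it maps naturally onto a GPU fragment shader without any precomputed depth map.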