Volumetric rendering is widely used to examine 3D scalar fields from CT/MRI scanners and numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide perceptual cues that aid in understanding the structure contained in the data. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre)computation. In this paper, a shading model for interactive direct volume rendering is proposed that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image-space occlusion factor is derived from the radiative transport equation using a specialized phase function. The method does not rely on any precomputation and thus allows for interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions, while modifications to the volume via clipping planes are incorporated into the resulting occlusion-based shading.
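To make this concrete, here is a minimal sketch (not the paper's implementation) of how an occlusion factor can be maintained in image space during front-to-back slice compositing. It assumes premultiplied RGBA slice buffers, a Gaussian blur of the per-slice opacity stands in for the specialized phase function, and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch: an image-space occlusion buffer updated incrementally during
# front-to-back slicing. Each slice is lit by the blurred fraction of light
# not yet absorbed by the slices in front of it, approximating an occlusion
# cone without any precomputation.
def shade_slices(slices, blur_sigma=4.0):
    """slices: (n, H, W, 4) premultiplied RGBA, ordered front to back."""
    n, H, W, _ = slices.shape
    occlusion = np.ones((H, W, 1))    # unoccluded light reaching each pixel
    image = np.zeros((H, W, 4))
    for img in slices:
        lit = img.copy()
        lit[..., :3] *= occlusion                     # modulate by incoming light
        image = image + lit * (1.0 - image[..., 3:])  # front-to-back "under"
        absorbed = gaussian_filter(img[..., 3], sigma=blur_sigma)[..., None]
        occlusion = occlusion * (1.0 - absorbed)      # attenuate and spread
    return image
```

Because the occlusion buffer is rebuilt every frame from the classified slices, edits to the transfer function or clipping planes are reflected immediately, which is why no precomputation is required.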
Figure 1: The backpack data set (512 × 512 × 373 voxels) rendered (a) with Phong shading only (6.1 FPS) and (b–d) with depth of field (α = 30.9°, 3.8 FPS each), focused on (b) the spray can in the foreground, (c) the wires behind it, and (d) the boxes with the other spray can in the background. In each image 1469 slices were taken and gradients were estimated on the fly.

Abstract: In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which have previously been shown to aid observers in the depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any precomputation, thus allowing interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions.
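As a concrete illustration of incremental filtering, the sketch below (an assumption-laden stand-in, not the paper's code) composites slices back to front and re-blurs the accumulation buffer between slices. Because Gaussian blurs compose in quadrature, a slice inserted at depth z has been blurred by roughly the circle of confusion at z once the march reaches the focal plane; only content behind the focal plane is handled here, with the in-front case treated analogously:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sigma_of_depth(z, focal_z, scale=1.5):
    # Illustrative linear circle-of-confusion model, behind focus only.
    return scale * max(z - focal_z, 0.0)

def composite_dof(slices, depths, focal_z):
    """slices: (n, H, W, 4) premultiplied RGBA, ordered back to front;
    depths: matching slice depths, decreasing toward the eye."""
    acc = np.zeros_like(slices[0], dtype=float)
    for img, z, z_next in zip(slices, depths, list(depths[1:]) + [focal_z]):
        acc = img + acc * (1.0 - img[..., 3:])     # back-to-front "over"
        s_here = sigma_of_depth(z, focal_z)
        s_next = sigma_of_depth(max(z_next, focal_z), focal_z)
        inc = np.sqrt(max(s_here**2 - s_next**2, 0.0))   # quadrature step
        if inc > 0.0:                                    # incremental re-blur
            acc = gaussian_filter(acc, sigma=(inc, inc, 0))
    return acc
```

Since the blur is applied incrementally to the running accumulation buffer, the cost per slice is a single small-kernel filter pass, which is what keeps the effect interactive.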
Existing techniques for rendering implicit surfaces of arbitrary form are limited in performance, correctness, or flexibility. Ray tracing algorithms employing interval arithmetic (IA) or affine arithmetic (AA) for root finding are robust and general in the class of surfaces they support, but traditionally slow. Nonetheless, implemented efficiently using a stack-driven iterative algorithm and SIMD vector instructions, these methods can achieve interactive performance for common algebraic surfaces on the CPU. A similar algorithm can also be implemented stacklessly, allowing for efficient ray tracing on the GPU. This paper presents these algorithms, as well as an inclusion-preserving reduced affine arithmetic (RAA) for faster ray-surface intersection. Shader metaprogramming allows for immediate and automatic generation of symbolic expressions and their interval or affine extensions. Moreover, we are able to render even complex forms robustly, in real time and at high resolution.
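The core of such a ray tracer is interval root isolation along the ray. Below is a minimal CPU-side sketch of the stack-driven iterative algorithm for a unit sphere; the tiny interval class, the surface, and the tolerance are illustrative assumptions, and the paper's reduced affine arithmetic would replace the plain intervals:

```python
from dataclasses import dataclass

@dataclass
class Ival:
    lo: float
    hi: float
    def __add__(self, o): return Ival(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Ival(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Ival(min(p), max(p))
    def straddles_zero(self): return self.lo <= 0.0 <= self.hi

def f_ival(x, y, z):
    # Interval extension of the unit sphere f(x, y, z) = x^2 + y^2 + z^2 - 1.
    return x*x + y*y + z*z - Ival(1.0, 1.0)

def first_hit(o, d, t_range=(0.0, 10.0), eps=1e-5):
    """Stack-driven bisection: prune t-intervals whose inclusion function
    excludes zero; descend into the nearer half first so the first
    converged interval is the nearest robust intersection."""
    stack = [t_range]
    while stack:
        t0, t1 = stack.pop()
        t = Ival(t0, t1)
        p = [Ival(o[i], o[i]) + Ival(d[i], d[i]) * t for i in range(3)]
        if not f_ival(*p).straddles_zero():
            continue                    # proven miss over this interval
        if t1 - t0 < eps:
            return 0.5 * (t0 + t1)
        tm = 0.5 * (t0 + t1)
        stack.append((tm, t1))          # far half pushed first,
        stack.append((t0, tm))          # near half popped next
    return None

print(first_hit((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # ~2.0 for the unit sphere
```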
Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C0 transfer functions.
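The essential step can be sketched as follows: while marching, bracket a sign change of s(t) − v_iso between consecutive samples and solve for the crossing explicitly, instead of hoping the sampling rate catches the Dirac impulse. The analytic field below stands in for a trilinear volume lookup, and bracketing plus bisection is one illustrative root solver, not necessarily the paper's:

```python
import numpy as np

def scalar(t):
    # Illustrative stand-in for a trilinearly interpolated volume sample.
    return np.exp(-(t - 2.0) ** 2)

def first_iso_crossing(iso, t0=0.0, t1=5.0, dt=0.25, refine=30):
    s_prev, t_prev = scalar(t0) - iso, t0
    for t in np.arange(t0 + dt, t1 + dt, dt):
        s = scalar(t) - iso
        if s_prev * s < 0.0:            # bracketed an isovalue crossing
            a, b, fa = t_prev, t, s_prev
            for _ in range(refine):     # bisect down to the isosurface
                m = 0.5 * (a + b)
                fm = scalar(m) - iso
                if fa * fm <= 0.0:
                    b = m
                else:
                    a, fa = m, fm
            return 0.5 * (a + b)
        s_prev, t_prev = s, t
    return None

t_hit = first_iso_crossing(0.5)
print(t_hit, float(scalar(t_hit)))   # s(t_hit) ~ 0.5 at the first crossing
```

A composited sample can then be placed exactly at the recovered t_hit, so the sharp feature is rendered without artifacts regardless of the base step size dt.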
In this paper, we present a user study on the use of Depth of Field for depth perception in Direct Volume Rendering. Direct Volume Rendering with Phong shading and perspective projection is used as the baseline; Depth of Field is then added to assess its impact on the correct perception of ordinal depth. Accuracy and response time serve as the metrics for evaluating the usefulness of Depth of Field. The on-site user study has two parts, static and dynamic, and eye tracking is used to monitor the gaze of the subjects. Our results show that although Depth of Field does not act as a proper depth cue in all conditions, it can be used to reinforce the perception of which feature is in front of the other. The best results (high accuracy and fast response time) for correct perception of ordinal depth are obtained when the front feature (of the two the users were to choose from) is in focus and perspective projection is used.