Modern supercomputers enable increasingly large N-body simulations using unstructured point data. The structures implied by these points can be reconstructed implicitly. Direct volume rendering of radial basis function (RBF) kernels in domain space offers flexible classification and robust feature reconstruction, but achieving performant RBF volume rendering remains a challenge for existing methods on both CPUs and accelerators. In this paper, we present a fast CPU method for direct volume rendering of particle data with RBF kernels. We propose a novel two-pass algorithm: first sampling the RBF field using coherent bounding volume hierarchy (BVH) traversal, then integrating samples along ray segments. Our approach performs at interactive rates for a range of data sets from molecular dynamics and astrophysics, up to 82 million particles. It does not rely on level of detail or subsampling, and offers better reconstruction quality than structured volume rendering of the same data, with comparable performance and no preprocessing or memory footprint beyond the BVH. Lastly, our technique enables multi-field, multi-material classification of particle data, providing better insight and analysis.
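As a rough illustration of the two-pass structure described above, the sketch below shows one plausible form of the second pass: evaluating a Gaussian RBF field at samples along a ray segment and compositing them front to back. All types and names here (Particle, sampleRBF, classify, integrateSegment) are hypothetical and not the paper's implementation; the first pass, which gathers candidate particles per ray segment via coherent BVH traversal, is elided.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// Hypothetical particle record: position, kernel support radius, scalar value.
struct Particle { float x, y, z, radius, value; };
struct RGBA { float r, g, b, a; };

// Evaluate a Gaussian RBF field at point p by summing contributions from
// candidate particles. In the paper's pipeline the candidates would come from
// coherent BVH traversal; here we simply scan a precomputed list (an
// assumption made for brevity).
float sampleRBF(const std::array<float, 3>& p, const std::vector<Particle>& cand) {
    float sum = 0.f;
    for (const Particle& q : cand) {
        float dx = p[0] - q.x, dy = p[1] - q.y, dz = p[2] - q.z;
        float d2 = dx * dx + dy * dy + dz * dz;
        float r2 = q.radius * q.radius;
        if (d2 < r2)  // inside the kernel's support
            sum += q.value * std::exp(-d2 / (2.f * r2));  // Gaussian kernel
    }
    return sum;
}

// Placeholder transfer function mapping scalar to color and opacity.
RGBA classify(float s) {
    float a = std::min(1.f, s);
    return { s, 0.5f * s, 1.f - s, a };
}

// Second pass: integrate samples along one ray segment [t0, t1] with fixed
// step dt, compositing front to back and terminating early once opaque.
RGBA integrateSegment(const std::array<float, 3>& org,
                      const std::array<float, 3>& dir,
                      float t0, float t1, float dt,
                      const std::vector<Particle>& cand) {
    RGBA acc{ 0.f, 0.f, 0.f, 0.f };
    for (float t = t0; t < t1 && acc.a < 0.99f; t += dt) {
        std::array<float, 3> p{ org[0] + t * dir[0],
                                org[1] + t * dir[1],
                                org[2] + t * dir[2] };
        RGBA c = classify(sampleRBF(p, cand));
        float w = (1.f - acc.a) * c.a;  // front-to-back alpha compositing
        acc.r += w * c.r; acc.g += w * c.g; acc.b += w * c.b;
        acc.a += w;
    }
    return acc;
}
```

Separating sampling from integration in this way lets the expensive particle gathering be amortized across all samples in a segment, which is the motivation for the two-pass design.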
Introduction

Direct volume rendering (DVR) is an increasingly popular modality for visualizing 3D scalar fields in scientific data. It reconstructs, classifies, and shades any continuous scalar field (formalized by the volume rendering integral below), enabling better insight than surface-based visualization in many applications. Volume rendering of structured data is now commonplace, and optimized methods have been developed for unstructured mesh and finite-element data. Generally, these methods have been implemented on GPUs, due to their high computational throughput and built-in hardware texture sampling. However, volume rendering directly from unstructured point data remains a challenge.

N-body codes in particular produce large quantities of data. For example, large molecular dynamics simulations can generate megabytes to gigabytes per time step and tens to hundreds of thousands of time steps; large astrophysics simulations can generate terabytes to petabytes per time step. At scale, post-processing and moving such data is prohibitive. Resampling particle data into a structured volume costs memory and computation, and sacrifices information and visual quality (e.g., Figure 1). Computing isosurfaces is similarly costly, and prevents interactive classification and analysis of the original scalar fields. These factors motivate in transit and in situ visualization on high performance computing (HPC) resources with minimal preprocessing and data movement.
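For reference, DVR in this setting typically evaluates the standard emission-absorption volume rendering integral along each ray; this is the textbook model, not an equation reproduced from this paper:

$$C = \int_{t_0}^{t_1} c\big(s(\mathbf{x}(t))\big)\,\mu\big(s(\mathbf{x}(t))\big)\,\exp\!\left(-\int_{t_0}^{t} \mu\big(s(\mathbf{x}(u))\big)\,du\right) dt,$$

where $s$ is the reconstructed scalar field (here, a sum of RBF kernels centered at the particles), and $c$ and $\mu$ are the color and extinction assigned by the transfer function. Discretizing this integral with a fixed step yields the familiar front-to-back compositing loop sketched earlier.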