Figure 1: Real-time results of our method for simulating translucent materials (skin on the left, ketchup on the right). Our separable subsurface-scattering method enables the generation of these images using only two convolutions (versus 12 in the sum-of-Gaussians approach [dLE07, JSG09]) and seven samples per pixel, while featuring quality comparable with the current state of the art, at a fraction of its cost. It can be implemented as a post-processing step and takes only 0.489 ms per frame on an AMD Radeon HD 7970 at 1080p, which makes it highly suitable for challenging real-time scenarios.
Abstract
In this paper we propose two real-time models for simulating subsurface scattering in a large variety of translucent materials, each executing in under 0.5 milliseconds per frame. This makes them a practical option for real-time production scenarios. Current state-of-the-art real-time approaches simulate subsurface light transport by approximating the radially symmetric, non-separable diffusion kernel with a sum of separable Gaussians, which requires multiple (up to twelve) 1D convolutions. In this work we relax the requirement of radial symmetry to approximate a 2D diffuse reflectance profile with a single separable kernel. We first show that low-rank approximations based on matrix factorization outperform previous approaches, but they still need several passes to achieve good results. To solve this, we present two different separable models: the first yields a high-quality diffusion simulation, while the second offers an attractive trade-off between physical accuracy and artistic control. Both allow rendering subsurface scattering with only two 1D convolutions, reducing both execution time and memory consumption while delivering results comparable to techniques with higher cost. Using our importance-sampling and jittering strategies, only seven samples per pixel are required. Our methods can be implemented as simple post-processing steps without intrusive changes to existing rendering pipelines.
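To make the separability idea concrete, the following sketch (an illustration of the general principle, not the paper's implementation) builds a non-separable radially symmetric profile from a sum of two Gaussians as a stand-in for a measured diffuse reflectance profile, computes its best rank-1 separable approximation via an SVD-based matrix factorization, and applies it to an image as two 1D convolutions; all function names and parameter values are illustrative assumptions.

import numpy as np
from scipy.ndimage import convolve1d

def radial_profile(size=33, sigmas=(2.0, 8.0), weights=(0.6, 0.4)):
    # Radially symmetric, non-separable stand-in for a diffuse reflectance
    # profile: a weighted sum of two Gaussians (each Gaussian is separable,
    # but their sum is not).
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    d2 = xx**2 + yy**2
    k = sum(w * np.exp(-d2 / (2.0 * s**2)) for w, s in zip(weights, sigmas))
    return k / k.sum()

def rank1_factors(kernel2d):
    # Best rank-1 (separable) approximation in the least-squares sense,
    # taken from the leading singular vectors of the kernel matrix.
    u, s, vt = np.linalg.svd(kernel2d)
    col = u[:, 0] * np.sqrt(s[0])   # vertical 1D kernel
    row = vt[0, :] * np.sqrt(s[0])  # horizontal 1D kernel
    if col.sum() < 0:               # resolve the SVD sign ambiguity
        col, row = -col, -row
    return col, row

def separable_filter(image, col, row):
    # Two 1D convolutions replace one full 2D convolution.
    tmp = convolve1d(image, row, axis=1, mode='nearest')  # horizontal pass
    return convolve1d(tmp, col, axis=0, mode='nearest')   # vertical pass

if __name__ == "__main__":
    k2d = radial_profile()
    col, row = rank1_factors(k2d)
    err = np.linalg.norm(k2d - np.outer(col, row)) / np.linalg.norm(k2d)
    print(f"relative error of the rank-1 approximation: {err:.3e}")
    irradiance = np.random.rand(256, 256)  # placeholder irradiance buffer
    filtered = separable_filter(irradiance, col, row)

In a real-time renderer the two 1D passes would run as post-processing passes over the rendered irradiance, which is where the reported two-convolution cost comes from.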