Figure 1: Rendering results of (a) our method, and relative error images of (b) our method and (c) Lightcuts [WFA*05] in the Sponza scene. A relative error threshold ε = 2% and a confidence level α = 95% are specified. In our method, 92.96% of the pixels have a relative error within 2%, whereas only 43.67% of the pixels satisfy this condition with Lightcuts. The (d) Kitchen and (e) San Miguel scenes use Cook-Torrance and Ashikhmin-Shirley BRDFs, which Lightcuts cannot handle. (f) and (g) show the relative error images of (d) and (e), respectively.
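Read strictly, the caption's per-pixel criterion can be written as follows (our paraphrase of the setup, not an equation taken from the paper):

\[
\Pr\left(\frac{|\hat{I} - I|}{I} \le \varepsilon\right) \ge \alpha,
\]

where \(\hat{I}\) is the clustered estimate of the true per-pixel illumination \(I\), with \(\varepsilon = 2\%\) and \(\alpha = 95\%\) in the figure.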
Abstract

Many-light rendering, which converts complex global illumination computation into a simple sum of the illumination from virtual point lights (VPLs), has become increasingly popular for predictive rendering in recent years. Predictive rendering usually requires a huge number of VPLs, at the cost of extensive computation time. While previous methods achieve significant speedups by clustering VPLs, none of them can estimate the total error introduced by clustering. This drawback forces users into a tedious trial-and-error process to obtain rendered images of reliable accuracy. In this paper, we propose an error estimation framework for many-light rendering. Our method recasts VPL clustering as stratified sampling combined with confidence intervals, which enables the user to estimate the clustering error without the costly computation of summing the illumination from all the VPLs. Our estimation framework handles arbitrary BRDFs and is accelerated by visibility caching, both of which make our method more practical. The experimental results demonstrate that our method estimates the error much more accurately than the previous clustering method.
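To make the core idea concrete, the following is a minimal sketch, not the paper's implementation, of estimating a pixel's VPL sum by stratified sampling with a confidence interval. The strata stand in for VPL clusters; contribution(), the stratum layout, and the sample counts are hypothetical placeholders.

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Estimate {
    double value;     // estimated total illumination at the pixel
    double halfWidth; // half-width of the confidence interval
};

// Hypothetical per-VPL contribution; a real renderer would evaluate
// BRDF * geometry term * visibility for the shading point.
double contribution(int vplIndex) {
    return 1.0 / (1.0 + 0.01 * vplIndex);
}

// Stratified estimator: sample each stratum (cluster) uniformly with
// replacement, scale the sample mean to the stratum sum, and accumulate
// the variance of each term. Requires samplesPerStratum >= 2.
Estimate estimateVplSum(const std::vector<std::vector<int>>& strata,
                        int samplesPerStratum, double zAlpha /* 1.96 for 95% */) {
    std::mt19937 rng(42);
    double total = 0.0, variance = 0.0;
    for (const auto& stratum : strata) {
        const double n = static_cast<double>(stratum.size());
        std::uniform_int_distribution<int> pick(0, static_cast<int>(stratum.size()) - 1);
        double mean = 0.0, m2 = 0.0; // Welford's running mean and sum of squares
        for (int s = 0; s < samplesPerStratum; ++s) {
            double x = contribution(stratum[pick(rng)]);
            double delta = x - mean;
            mean += delta / (s + 1);
            m2 += delta * (x - mean);
        }
        double sampleVar = m2 / (samplesPerStratum - 1);
        total += n * mean;                                 // stratum sum estimate
        variance += n * n * sampleVar / samplesPerStratum; // its variance
    }
    return {total, zAlpha * std::sqrt(variance)};
}

int main() {
    // Toy scene: 4 "clusters" of 250 VPLs each (indices 0..999).
    std::vector<std::vector<int>> strata(4);
    for (int i = 0; i < 1000; ++i) strata[i / 250].push_back(i);

    Estimate e = estimateVplSum(strata, /*samplesPerStratum=*/16, /*zAlpha=*/1.96);
    const double eps = 0.02; // relative error threshold from the figure's setup
    std::printf("estimate = %.4f +/- %.4f\n", e.value, e.halfWidth);
    std::printf("within 2%% at 95%% confidence: %s\n",
                e.halfWidth <= eps * e.value ? "yes" : "no");
}

The final check, halfWidth <= eps * value, mirrors the per-pixel criterion above; the paper additionally supports arbitrary BRDFs and accelerates visibility queries with caching, neither of which this toy reproduces.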