For the seamless integration of virtual content into real scenes, achieving mutual global lighting effects between both worlds is among the most important and challenging goals. Consequently, numerous global illumination approaches exist, most of which share the same restriction: the real scene is approximated by a model built in advance, which therefore has to remain static. In this paper, we propose an image-space global illumination approach based on reflective shadow maps, combined with an RGB-D camera, to simulate first-bounce diffuse indirect illumination without any pre-computation. Our approach supports indirect illumination in both directions (real to virtual and vice versa) and runs in real time. Furthermore, it does not require advanced shader capabilities, since our implementation makes efficient use of the Z-buffer algorithm for calculating indirect illumination.
INTRODUCTION

Due to the fast development of modern graphics cards, real-time global illumination (GI) has become a vast area of research. Indirect illumination effects that were previously seen only in offline rendering are becoming applicable to more and more real-time applications. Since many real-time GI approaches require additional pre-computation on the scene, they typically restrict dynamic changes to rigid-body motion or approximate indirect illumination for low-frequency behavior only. Neither strategy is suitable for mixed reality applications, where the captured real scene typically introduces all kinds of dynamic movement that must be considered instantly. This is particularly unfortunate for mixed reality, since, due to the blending of virtual and real content, global illumination effects become all the more important for a seamless integration of virtual content (see Figure 1).

Nevertheless, global illumination approaches exist that are real-time capable and also account for high-frequency indirect light behavior. Reflective Shadow Maps (RSM) by Dachsbacher and Stamminger [1] represent such an approach. Being robust, it provides a foundation for several recent GI algorithms [2][3]. The performance of indirect lighting with RSM can be improved considerably if the lighting calculation is restricted to the visible part of the scene only [4], which can be achieved by deferred lighting in screen space.

This restriction to screen-space calculations opens up new opportunities in the area of mixed reality. Employing an RGB-D camera allows us to obtain a coarse spatial representation of the visible scene, which may then be used for GI calculations. To refine this coarse spatial representation, we propose guided image filtering [5], where the RGB image serves as the guidance image to improve the depth image.

During our research we observed that indirect illumination with RSM yields acceptable results for mixed reality applications only if the number of virtual point lights (VPLs) introducing indirect light based on the RSM is very high (4-8k). For such high VPL counts, the fill rate becomes the bottleneck of an implementation. Thus, o...
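To make the depth refinement step above concrete, the following is a minimal sketch of the guided image filter of He et al. [5] applied to an RGB-D frame, using OpenCV for the box filtering; the function name refineDepth and the radius/eps values are illustrative assumptions, not parameters taken from our system.

    // Guided filter [5]: refine a noisy depth map using the grayscale
    // RGB frame as guidance image (sketch; parameters are assumptions).
    #include <opencv2/opencv.hpp>

    cv::Mat refineDepth(const cv::Mat& rgb, const cv::Mat& depth,
                        int radius = 8, double eps = 1e-4)
    {
        cv::Mat I, p;
        cv::cvtColor(rgb, I, cv::COLOR_BGR2GRAY);
        I.convertTo(I, CV_32F, 1.0 / 255.0);   // guidance image in [0,1]
        depth.convertTo(p, CV_32F);            // depth image to be filtered

        cv::Size win(2 * radius + 1, 2 * radius + 1);
        auto box = [&](const cv::Mat& src) {   // normalized box filter
            cv::Mat dst;
            cv::boxFilter(src, dst, CV_32F, win);
            return dst;
        };

        // Per-window local linear model q = a*I + b (He et al. [5]).
        cv::Mat meanI = box(I), meanP = box(p);
        cv::Mat corrIP = box(I.mul(p)), corrII = box(I.mul(I));
        cv::Mat varI  = corrII - meanI.mul(meanI);
        cv::Mat covIP = corrIP - meanI.mul(meanP);
        cv::Mat a = covIP / (varI + eps);      // edge-aware gain
        cv::Mat b = meanP - a.mul(meanI);

        // Average the coefficients over windows, then evaluate the model:
        // depth edges are snapped to the (more reliable) RGB edges.
        return box(a).mul(I) + box(b);
    }

The regularizer eps controls how strongly weak RGB edges are smoothed over; larger windows propagate depth across bigger holes at the cost of blurring fine geometry.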
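For the RSM-based indirect lighting itself, the quantity evaluated per visible pixel in a deferred screen-space pass is the first-bounce diffuse irradiance estimate of Dachsbacher and Stamminger [1], summed over the sampled VPL set. The CPU-side sketch below shows this gathering under assumed names (Vpl, gatherIndirect); it is not the paper's actual shader implementation.

    // First-bounce diffuse gathering from RSM pixels treated as VPLs,
    // following the RSM irradiance estimate [1]. Sketch with assumed names.
    #include <algorithm>
    #include <vector>
    #include <glm/glm.hpp>

    struct Vpl {
        glm::vec3 pos;     // world-space position from the RSM depth
        glm::vec3 normal;  // sender normal stored in the RSM
        glm::vec3 flux;    // reflected flux of the RSM pixel
    };

    // Irradiance at a receiver point (position x and normal n taken from
    // the RGB-D G-buffer). Using unnormalized direction vectors in both
    // cosine terms makes the distance falloff appear as ||x - x_p||^4,
    // as in the original RSM formulation [1].
    glm::vec3 gatherIndirect(const glm::vec3& x, const glm::vec3& n,
                             const std::vector<Vpl>& vpls)
    {
        glm::vec3 E(0.0f);
        for (const Vpl& vpl : vpls) {
            glm::vec3 d = x - vpl.pos;                   // sender -> receiver
            float dist2 = glm::dot(d, d) + 1e-6f;        // avoid singularity
            float sender   = std::max(0.0f, glm::dot(vpl.normal,  d));
            float receiver = std::max(0.0f, glm::dot(n,           -d));
            E += vpl.flux * (sender * receiver) / (dist2 * dist2);
        }
        return E;
    }

The loop body makes the fill-rate issue mentioned above plausible: with 4-8k VPLs, a splatting implementation must blend thousands of such contributions per frame, which is exactly the cost our Z-buffer-based formulation targets.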