Abstract. This paper presents a real-time refinement procedure for depth data acquired by RGB-D cameras. Data from RGB-D cameras suffer from undesired artifacts such as edge inaccuracies or holes arising from occlusions or low object remission. In this work, we take recent depth enhancement filters intended for Time-of-Flight cameras and extend them to structured-light depth cameras, such as the Kinect. Thus, given a depth map and its corresponding 2-D image, we correct the depth measurements by treating each type of undesired region separately. To that end, we propose specific confidence maps to tackle areas of the scene that require special treatment. Furthermore, for filtering artifacts, we introduce the use of RGB images as guidance images, as an alternative to real-time state-of-the-art fusion filters that rely on grayscale guidance images. Our experimental results show that the proposed fusion filter provides dense depth maps in which erroneous or invalid depth measurements are corrected and depth edges are adjusted. In addition, we propose a mathematical formulation that enables the filter to be used in real-time applications.