Abstract. Robotic assistance in minimally invasive surgical interventions has gained substantial popularity over the past decade. Surgeons perform such operations by remotely manipulating laparoscopic tools whose motion is executed by the surgical robot. One of the main tools deployed is an endoscopic binocular camera that provides stereoscopic vision of the surgical scene. Robotic assistance has garnered particularly wide interest in renal interventions such as partial nephrectomy, which is the focus of our work. This procedure consists of localizing and removing tumorous tissue from the kidney. During the operation, the surgeon would greatly benefit from an augmented reality view displaying additional information from the available imaging modalities, such as pre-operative CT and intra-operative ultrasound. To fuse and visualize these complementary data sources in a pertinent way, they must be accurately registered to a 3D reconstruction of the surgical scene topology captured by the binocular camera. In this paper we propose a simple yet powerful approach for dense matching between the two stereoscopic camera views and for reconstruction of the 3D scene. Our method adaptively and accurately finds the optimal correspondence between each pair of images according to three strict confidence criteria that efficiently discard the majority of outliers. In experiments on clinical in-vivo stereo data, including comparisons with two state-of-the-art 3D reconstruction techniques for minimally invasive surgery, our results demonstrate the superior robustness of our approach and its better suitability for realistic surgical applications.
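To make the abstract's notion of confidence-filtered dense matching and 3D reconstruction concrete, the sketch below shows two generic building blocks of rectified stereo reconstruction: a left-right consistency check (one commonly used confidence criterion) and depth recovery from disparity via Z = f * B / d. The abstract does not specify the paper's three confidence criteria, so this is only an illustrative stand-in under standard assumptions; all function names and parameters are hypothetical.

```python
import numpy as np


def left_right_consistency_mask(disp_left, disp_right, max_diff=1.0):
    """Keep pixels whose left->right and right->left disparities agree.

    A generic stereo confidence criterion, not the paper's specific ones:
    a left pixel (x, y) with disparity d should map to right pixel
    (x - d, y) whose own disparity is close to d.
    """
    h, w = disp_left.shape
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    # Column in the right image predicted by the left disparity map.
    right_cols = np.clip(np.round(cols - disp_left).astype(int), 0, w - 1)
    disp_right_warped = disp_right[rows, right_cols]
    return np.abs(disp_left - disp_right_warped) <= max_diff


def depth_from_disparity(disp, focal_px, baseline_m):
    """Depth for a rectified stereo pair: Z = focal * baseline / disparity."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

In such a pipeline, pixels failing the consistency mask (or any other confidence test) would be discarded as outliers before triangulating the remaining matches into the 3D surface used for registering the CT and ultrasound data.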