Technical innovations in sensing and computation are quickly advancing the field of computer-integrated surgery. On one side, spectral imaging and biophotonics are strongly contributing to intraoperative diagnostics and decision making. Simultaneously, learning-based algorithms are reshaping the concept of assistance and prediction in surgery. In this fast-evolving panorama, we strongly believe there is still a need for robust geometric reconstruction of the surgical field, whether the goal is traditional surgical assistance or partial or full autonomy.

3D reconstruction in surgery has been investigated almost exclusively with monocular and stereoscopic visual imaging, because surgeons always view the procedure through a clinical endoscope. Compared with traditional computer vision, deep learning has enabled significant progress in creating high-quality 3D reconstructions and dense maps from such data streams, especially for monocular simultaneous localization and mapping (SLAM) [1]. Its main limitations lie in reliability, generalization, and computational cost.

Meanwhile, lidar (light detection and ranging) has greatly expanded in use, especially in SLAM for robotics, terrestrial vehicles, and drones. Lidar sensors explicitly measure the depth field rather than inferring it from camera images. The technology is evolving quickly thanks to the upsurge of mixed and augmented reality in consumer mobile devices [2]: high-resolution, short-range, miniaturized lidar sensors are expected soon.

In parallel to these developments, the concept of multiple-viewpoint surgical imaging was proposed in the early 2010s in the context of magnetic actuation and micro-invasive surgery [3]. In routine clinical practice, however, the use of multiple trans-abdominal cannulae still constrains the kinematics of the camera and instruments to a fixed pivot point at the body wall.
For this reason, we propose here an approach in which each surgical cannula can potentially hold a miniature lidar. We envision that exploiting this powerful sensing technology, and enabling multi-viewpoint imaging without disrupting the current surgical workflow, will yield far more accurate and complete 3D reconstructions of the surgical field, opening new opportunities for the future of computer-integrated surgery.