This work presents a concept for the sterile use of 3D cameras in surgical environments. In the digital operating room (OR), such cameras can serve as a valuable data source for cognitive workflow assistance systems, e.g., decision or mechatronic support systems. Recent examples include robotic assistants for instrument handling, such as the robotic scrub nurse currently being developed within the SASHA-OR research project. In this context, we detect laparoscopic instruments and the surgical environment with a 3D camera while meeting hygienic requirements. Using a Zivid Two sensor, we generated point clouds of laparoscopic instruments placed in an instrument holder and in a drop zone. We compared different pane types and thicknesses for the sterile camera enclosure and evaluated point cloud accuracy with and without a protective pane. Among the pane types analyzed, polymethyl methacrylate with 0.5 mm thickness (PMMA 0.5) provided the best results. At a scan distance of 560 mm to the surface center, which is required for the complete acquisition of a laparoscopic instrument, PMMA 0.5 achieved the smallest Chamfer distance (CD) values for the scans with the laparoscopic instruments both in the instrument holder (0.23 ± 1.52 mm) and in the drop zone (0.12 ± 0.25 mm).
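The abstract reports point cloud accuracy as Chamfer distance in mm (mean ± standard deviation). As an illustration only, the sketch below computes a symmetric nearest-neighbour Chamfer distance between two point clouds; the exact CD variant used in the paper (e.g., squared vs. unsquared distances, one- vs. two-directional) is not specified in the abstract, and the function and variable names here are assumptions.

```python
# Minimal sketch of a symmetric Chamfer distance between two point clouds,
# assuming both are (N, 3) NumPy arrays in millimetres. Names such as
# `reference` and `scan` are illustrative, not taken from the paper.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(reference: np.ndarray, scan: np.ndarray):
    """Return mean and std of nearest-neighbour distances in both directions."""
    d_ref_to_scan, _ = cKDTree(scan).query(reference)    # reference point -> closest scan point
    d_scan_to_ref, _ = cKDTree(reference).query(scan)    # scan point -> closest reference point
    dists = np.concatenate([d_ref_to_scan, d_scan_to_ref])
    return dists.mean(), dists.std()

# Hypothetical usage: compare a captured cloud against a reference model.
# mean_mm, std_mm = chamfer_distance(reference_cloud, captured_cloud)
```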
Depth estimation from monocular images is an important task in localization and 3D reconstruction pipelines for bronchoscopic navigation. Various supervised and self-supervised deep learning-based approaches have proven effective on this task for natural images. However, the lack of labeled data and the feature-scarce texture of bronchial tissue render these methods ineffective on bronchoscopic scenes. In this work, we propose an alternative domain-adaptive approach. Our novel two-step structure first trains a depth estimation network with labeled synthetic images in a supervised manner; it then adopts an unsupervised adversarial domain feature adaptation scheme to improve performance on real images. The results of our experiments show that the proposed method improves the network's performance on real images by a considerable margin and can be employed in 3D reconstruction pipelines.
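To make the second, unsupervised step more concrete, the following is a minimal ADDA-style sketch of adversarial domain feature adaptation, assuming a PyTorch setup with an encoder pretrained on labeled synthetic images. The class and variable names (source_enc, target_enc, discriminator, the data loaders) are illustrative placeholders, not the authors' implementation.

```python
# Sketch: align real-image encoder features with the synthetic feature space
# by training a target encoder against a domain discriminator (ADDA-style).
import torch
import torch.nn as nn

def adapt_encoder(source_enc, target_enc, discriminator,
                  synthetic_loader, real_loader, steps, device="cuda"):
    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    opt_t = torch.optim.Adam(target_enc.parameters(), lr=1e-5)
    source_enc.eval()  # frozen encoder trained supervised on synthetic images

    for step, (syn, real) in enumerate(zip(synthetic_loader, real_loader)):
        if step >= steps:
            break
        syn, real = syn.to(device), real.to(device)

        # 1) Train the discriminator to separate synthetic vs. real features.
        with torch.no_grad():
            f_syn = source_enc(syn)
        f_real = target_enc(real).detach()
        logits = discriminator(torch.cat([f_syn, f_real]))
        labels = torch.cat([torch.ones(len(f_syn), 1),
                            torch.zeros(len(f_real), 1)]).to(device)
        loss_d = bce(logits, labels)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # 2) Train the target encoder to fool the discriminator, pulling
        #    real-image features toward the synthetic-domain distribution.
        logits_real = discriminator(target_enc(real))
        loss_t = bce(logits_real, torch.ones(len(real), 1, device=device))
        opt_t.zero_grad(); loss_t.backward(); opt_t.step()
```

After adaptation, the adapted target encoder would be paired with the depth decoder trained in the supervised first step to predict depth on real bronchoscopic images.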