We report here our results in a multi-sensor setup reproducing the conditions of an automated focused ultrasound surgery environment. The aim is to continuously predict the position of an internal organ (here the liver) under guided and non-guided free breathing, with the accuracy required by surgery. We have performed experiments with 16 healthy human subjects, two of whom took part in full-scale experiments involving a 3 Tesla MRI machine recording a volume containing the liver. For the other 14 subjects we have used the optical tracker as a surrogate target. All subjects were volunteers who agreed to participate in the experiments after being thoroughly informed about them. For the MRI sessions we have analyzed the images semi-automatically offline in order to obtain the ground truth, i.e. the true position of the selected feature of the liver. The results we have obtained with continuously updated random forest models are very promising: the prediction-target correlation coefficients were good for the surrogate targets (0.71 ± 0.1) and excellent for the real targets in the MRI experiments (over 0.91), despite the latter being limited to a lower model update frequency of once every 6.16 seconds.
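The abstract does not give implementation details, but the approach it describes — a regression model that is periodically refit on recent surrogate/target pairs and judged by the prediction-target correlation — can be sketched as follows. This is a minimal illustration, not the authors' code: the window length, feature layout, and helper names (ContinuousRFPredictor, observe, update) are assumptions made here for clarity; only the 6.16 s update period is taken from the text.

```python
# Minimal sketch (assumed, not the authors' implementation): a random-forest
# regressor refit on a sliding window of recent (surrogate, target) samples and
# evaluated with the Pearson correlation between predictions and ground truth.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import pearsonr

WINDOW = 200          # number of recent samples kept for refitting (assumed)
UPDATE_PERIOD = 6.16  # seconds between model updates, as reported for the MRI case

class ContinuousRFPredictor:
    def __init__(self, n_estimators=100):
        self.model = RandomForestRegressor(n_estimators=n_estimators)
        self.X_hist, self.y_hist = [], []
        self.fitted = False

    def observe(self, surrogate_features, target_position):
        """Store one synchronized (surrogate signal, target position) sample."""
        self.X_hist.append(surrogate_features)
        self.y_hist.append(target_position)
        # keep only the most recent WINDOW samples
        self.X_hist = self.X_hist[-WINDOW:]
        self.y_hist = self.y_hist[-WINDOW:]

    def update(self):
        """Refit the forest on the sliding window (called every UPDATE_PERIOD s)."""
        if len(self.y_hist) >= 10:
            self.model.fit(np.asarray(self.X_hist), np.asarray(self.y_hist))
            self.fitted = True

    def predict(self, surrogate_features):
        """Predict the target position from the current surrogate reading."""
        if not self.fitted:
            return None
        return self.model.predict(np.asarray(surrogate_features).reshape(1, -1))[0]

def prediction_target_correlation(predictions, ground_truth):
    """Correlation coefficient used as the accuracy metric in the abstract."""
    r, _ = pearsonr(np.asarray(predictions), np.asarray(ground_truth))
    return r
```

In use, `observe` would be called at the sensor rate, `update` on a timer (every 6.16 s in the MRI case, more often with the surrogate-only setup), and `predict` whenever a fresh position estimate is needed.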
Over the last decades, minimally invasive surgery (MIS) has become more and more important, although surgeons still have to deal with limited orientation and difficult navigation. The 2D camera image of the endoscope provides the only visual feedback. However, the field of view is narrow and the surgeon cannot control the viewing direction himself because the endoscope is usually held by an assistant. The "Endoguide" project, funded by the German Federal Ministry of Education and Research, aims to develop a novel computer-assisted surgery (CAS) system for laparoscopic interventions, consisting of two major parts: a new type of endoscope with variable viewing direction and a processing unit offering VR/AR support and intuitive user input paradigms. Instead of being held by an assistant, the new endoscope will be mounted and equipped with electric motors for controlling the viewing direction, focus and zoom of the camera. The CAS unit enables a multitude of advanced features. The entire system including the endoscope can be controlled through several touchless interfaces, such as speech recognition and head or gaze tracking. Moreover, real-time GPU-based video processing and optical tracking facilitate several automatic features that range from an FFT-based auto focus to image-based instrument tracking and adaptive region-of-interest selection. In particular, the latter allows the surgeon to concentrate on his tasks by automatically providing the correct view. In order to overcome the endoscope's limited field of view, the system automatically captures and stitches 360-degree panoramic overview scans, which can be viewed on a regular screen or in immersive environments such as small dome projections. The current live view from the endoscope is displayed as an inset at the correct position within the generated panoramic still image. Data and annotations from pre-surgical planning (e.g. from CT/MRI) can be overlaid and provide valuable information during the intervention. Tracking the head of the surgeon makes it possible to directly couple the viewing direction inside the dome with the orientation of the endoscope and, hence, to provide direct visual feedback.
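Of the automatic features listed, the FFT-based autofocus is the most self-contained, so a hedged sketch of the underlying idea is given below. It is not the Endoguide implementation: the cutoff radius, the focus-motor callback, and the function names are assumptions introduced here only to show how a spectral focus score can drive a motorized focus.

```python
# Minimal sketch (assumed, not the Endoguide code): an FFT-based focus score for
# a grayscale endoscope frame. A sharply focused image concentrates more energy
# at high spatial frequencies, so a motorized focus drive can step the lens to
# maximize this score. The cutoff radius is an illustrative assumption.
import numpy as np

def focus_score(frame: np.ndarray, cutoff: float = 0.1) -> float:
    """Ratio of high-frequency to total spectral energy of a 2D image."""
    spectrum = np.fft.fftshift(np.fft.fft2(frame.astype(np.float64)))
    power = np.abs(spectrum) ** 2

    h, w = frame.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    # radial distance from the spectrum centre (0 at DC, about 1 at the edges)
    radius = np.sqrt(((y - cy) / (h / 2)) ** 2 + ((x - cx) / (w / 2)) ** 2)

    high = power[radius > cutoff].sum()
    total = power.sum()
    return float(high / total) if total > 0 else 0.0

def autofocus(capture_frame, set_focus, positions):
    """Try a list of focus-motor positions and return the sharpest one.
    capture_frame and set_focus are hypothetical callbacks into the camera."""
    scores = []
    for p in positions:
        set_focus(p)                       # command the focus motor
        scores.append(focus_score(capture_frame()))
    return positions[int(np.argmax(scores))]
```

A coarse-to-fine search over the motor range (coarse sweep, then a finer sweep around the best coarse position) is a common refinement of this loop, but the abstract does not specify which strategy the project uses.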