The mobile C-arm is an essential tool in everyday trauma and orthopedic surgery. Minimally invasive solutions based on X-ray imaging and coregistered external navigation have generated considerable interest within the surgical community and have begun to replace traditional open surgery for many procedures. These solutions usually increase accuracy and reduce trauma, but they generally introduce new hardware into the OR and add the line-of-sight constraints imposed by optical tracking systems. They thus impose radical changes on the surgical setup and overall procedure. We augment a commonly used mobile C-arm with a standard video camera and a double-mirror system, allowing real-time fusion of optical and X-ray images. The video camera is mounted such that its optical center virtually coincides with the C-arm's X-ray source. After a one-time calibration routine, the acquired X-ray and optical images are coregistered. This paper describes the design of such a system, quantifies its technical accuracy, and provides a qualitative proof of its efficiency through cadaver studies conducted by trauma surgeons. In particular, it studies the relevance of this system for surgical navigation in pedicle screw placement, vertebroplasty, and intramedullary nail locking procedures. The image overlay provides an intuitive interface for surgical guidance with an accuracy of 1 mm, ideally using only a single X-ray image. The new system integrates smoothly into the clinical workflow with no additional hardware, especially for down-the-beam instrument guidance based on the anteroposterior oblique view, where the instrument axis is aligned with the X-ray source. Throughout all experiments, the camera-augmented mobile C-arm system proved to be an intuitive and robust guidance solution for the selected clinical routines.
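Once the one-time calibration has established the mapping between the video camera and the X-ray image plane, real-time fusion reduces to warping one image into the other's frame and blending. The following is a minimal sketch of that idea, assuming the calibration is summarized by a single 3x3 homography `H` (a simplification; the function names and the nearest-neighbour warp are illustrative, not the authors' implementation):

```python
import numpy as np

def warp_with_homography(image, H, out_shape):
    """Warp `image` into the X-ray frame via inverse mapping through H.
    Nearest-neighbour sampling keeps the sketch dependency-free."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = np.linalg.inv(H) @ pts          # back-project output pixels
    src /= src[2]                          # dehomogenize
    sx = np.clip(np.round(src[0]).astype(int), 0, image.shape[1] - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, image.shape[0] - 1)
    return image[sy, sx].reshape(h, w)

def fuse(xray, video, H, alpha=0.5):
    """Alpha-blend the warped video image over the X-ray image."""
    warped = warp_with_homography(video, H, xray.shape)
    return alpha * xray + (1.0 - alpha) * warped
```

Because the camera's optical center virtually coincides with the X-ray source, a single homography of this kind can relate the two views; in practice one would use an interpolating warp (e.g., OpenCV's `warpPerspective`) rather than nearest-neighbour sampling.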
The need to improve medical diagnosis and reduce invasive surgery depends upon seeing into a living human system. The use of diverse types of medical imaging and endoscopic instruments has provided significant breakthroughs, but not without limiting the surgeon's natural, intuitive, and direct 3D perception of the human body. This paper presents a method that uses Augmented Reality (AR) to converge improved perception of 3D medical imaging data (mimesis) in the context of the patient's own anatomy (in situ), incorporating the physician's intuitive multisensory interaction and integrating direct manipulation with endoscopic instruments. The transparency of the video images recorded by the color cameras of a video see-through, stereoscopic head-mounted display (HMD) is adjusted according to the position and line of sight of the observer, the shape of the patient's skin, and the location of the instrument. The modified video image of the real scene is then blended with the previously rendered virtual anatomy. The effectiveness has been demonstrated in a series of experiments at the Chirurgische Klinik in Munich, Germany, with cadaver and in-vivo studies. The results can be applied to the design of medical AR training and educational applications.
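The per-pixel transparency adjustment described above can be sketched as a spatially varying alpha map that opens a "window" into the rendered anatomy near a point of interest (e.g., the instrument's projection). The circular window below is a deliberately simple stand-in for the skin- and line-of-sight-driven alpha computation in the text; all names are illustrative:

```python
import numpy as np

def blend_with_window(video, anatomy, center, radius):
    """Blend a rendered anatomy layer into the video image inside a
    circular transparency window. `center` is (x, y) in pixels; alpha
    falls off linearly from 1 at the center to 0 at `radius`."""
    h, w = video.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - center[0], ys - center[1])
    alpha = np.clip(1.0 - dist / radius, 0.0, 1.0)  # anatomy opaque at center
    return alpha * anatomy + (1.0 - alpha) * video
```

A real system would derive the alpha map from the observer's viewing ray, the reconstructed skin surface, and the tracked instrument pose rather than from a fixed circle.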
In recent years, an increasing number of liver tumor indications have been treated by minimally invasive laparoscopic resection. Besides the restricted view, two major intraoperative issues in laparoscopic liver resection are the optimal planning of ports and the enhanced visualization of (hidden) vessels that supply the tumorous liver segment and thus need to be divided (e.g., clipped) prior to the resection. We propose an intuitive and precise method to plan the placement of ports. Preoperatively, self-adhesive fiducials are affixed to the patient's skin and a computed tomography (CT) data set is acquired while the liver vessels are contrasted. Immediately prior to the intervention, the laparoscope is moved around these fiducials, which are automatically reconstructed to register the patient to the preoperative imaging data set. This enables the simulation of a camera flight through the patient's interior along the laparoscope's or instruments' axes to easily validate potential ports. Intraoperatively, surgeons need to update their surgical planning based on actual patient data after organ deformations, mainly caused by the application of carbon dioxide pneumoperitoneum. Therefore, preoperative imaging data can hardly be used. Instead, we propose to use an optically tracked mobile C-arm providing cone-beam CT imaging capability intraoperatively. After patient positioning, port placement, and carbon dioxide insufflation, the liver vessels are contrasted and a 3-D volume is reconstructed during patient exhalation. Without any further need for patient registration, the reconstructed volume can be directly augmented on the live laparoscope video, since prior calibration enables both the volume and the laparoscope to be positioned and oriented in the tracking coordinate frame. The augmentation provides the surgeon with advanced visual aid for the localization of veins, arteries, and bile ducts to be divided or sealed.
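The fiducial-based patient registration described above boils down to finding the rigid transform that best aligns the reconstructed skin-fiducial positions with their locations in the CT data set. A standard closed-form solution is the SVD-based Kabsch/Umeyama method; the sketch below assumes paired correspondences and no scaling (the function name is ours, not the paper's):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst_i ~= R @ src_i + t.
    src, dst: (N, 3) arrays of corresponding fiducial positions."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)   # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With at least three non-collinear fiducials this yields a unique solution; clinical systems typically also report the fiducial registration error as a sanity check.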
The idea of in-situ visualization for surgical procedures has been widely discussed in the community [1,2,3,4]. While tracking technology nowadays offers sufficient accuracy and visualization devices have been developed that fit seamlessly into the operational workflow [1,3], one crucial problem remains, which was already discussed in the first paper on medical augmented reality [4]. Even though the data is presented at the correct place, the physician often perceives the spatial position of the visualization to be closer or farther than it actually is because of the virtual/real overlay. This paper describes and evaluates novel visualization techniques designed to overcome the misleading depth perception of virtual images trivially superimposed on the real view. We invited 20 surgeons to evaluate seven different visualization techniques using a head-mounted display (HMD). The evaluation was divided into two parts. In the first part, the depth perception of each kind of visualization was evaluated quantitatively. In the second part, the visualizations were evaluated qualitatively with regard to user friendliness and intuitiveness. This evaluation, with a relevant number of surgeons using a state-of-the-art system, is meant to guide future research and development on medical augmented reality.
Medical augmented reality (AR) has been widely discussed within the medical imaging as well as computer-aided surgery communities. Different systems for exemplary medical applications have been proposed, and some of them have produced promising results. One major issue still hindering AR technology from being regularly used in medical applications is the interaction between the physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance keyboard and mouse, are not adequate for interacting with visualized medical 3-D imaging data in an AR environment. This paper introduces the concept of a tangible, controllable Virtual Mirror for medical AR applications. This concept intuitively augments the direct view of the surgeon with all desired views of volumetric medical imaging data registered with the operation site, without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potential of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer-aided interventions.
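At its core, a virtual mirror re-renders the registered scene reflected about the mirror's plane. That reflection is a single homogeneous transform, sketched below under the assumption that the mirror is described by a point on its plane and a unit normal (an illustrative formulation, not necessarily the paper's implementation):

```python
import numpy as np

def mirror_matrix(n, p):
    """4x4 homogeneous reflection about the plane through point p with
    normal n: x' = (I - 2 n n^T) x + 2 (p . n) n."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    p = np.asarray(p, float)
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)      # reflect directions
    M[:3, 3] = 2.0 * np.dot(p, n) * n      # offset for a plane off the origin
    return M
```

Applying `M` to the scene before rendering, with the mirror pose driven by a tracked tangible handle, gives the surgeon arbitrary reflected views without moving around the table. Note that the reflection flips handedness, so the renderer's face-culling winding must be inverted for the mirrored pass.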