Intraoperative brain deformations decrease accuracy in image-guided neurosurgery. Approaches that quantify these deformations from 3-D reconstructions of cortectomy surfaces have been described and have shown promising results for extrapolating the deformation to the whole brain volume using additional prior knowledge or sparse volumetric modalities. Quantifying brain deformations from surface measurements requires registering surfaces acquired at different times during the surgical procedure, with different challenges depending on the patient and the surgical step. In this paper, we propose a new, flexible surface registration approach applicable to any textured point cloud computed by stereoscopic or laser-range techniques. The method combines three terms: the first relates to image intensities, the second to Euclidean distance, and the third to anatomical landmarks automatically extracted and continuously tracked in the 2-D video stream. Performance was evaluated on both phantom and clinical cases. The complete method, including textured point cloud reconstruction, achieved accuracy within 2 mm, which is the usual rigid registration error of neuronavigation systems before deformation occurs. Its main advantage is that it exploits all the available data, including the microscope video stream, at a higher temporal resolution than previously published methods.
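The three-term cost described in the abstract can be sketched as a weighted sum of an intensity term, a Euclidean distance term, and a landmark term. The function names, weights, and exact term definitions below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def nearest_neighbors(src, dst):
    # Brute-force nearest-neighbour index from each src point into dst.
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def registration_energy(src_pts, src_int, dst_pts, dst_int,
                        src_lm, dst_lm, alpha=1.0, beta=1.0, gamma=1.0):
    """Hypothetical three-term registration energy:
    intensity similarity + Euclidean distance + landmark correspondence.
    A registration would minimise this over a transform applied to src."""
    nn = nearest_neighbors(src_pts, dst_pts)
    # Term 1: squared intensity difference at matched points.
    e_int = np.mean((src_int - dst_int[nn]) ** 2)
    # Term 2: mean Euclidean distance to the closest point on the target cloud.
    e_dist = np.mean(np.linalg.norm(src_pts - dst_pts[nn], axis=1))
    # Term 3: residual between corresponding tracked anatomical landmarks.
    e_lm = np.mean(np.linalg.norm(src_lm - dst_lm, axis=1))
    return alpha * e_int + beta * e_dist + gamma * e_lm
```

For perfectly aligned clouds with identical textures and landmarks, the energy is zero; any in-between transform can then be scored and optimised with a generic minimiser.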
Abstract—This paper presents a new method for displaying, in the same 3D scene, multimodal preoperative images of a patient and images of the operative field viewed through the surgical microscope binoculars for image-guided neurosurgery. Matching real-world information, i.e., the operative field, with virtual-world information, i.e., preoperative images of the patient, is an important issue in image-guided neurosurgery. This can be achieved by superimposing preoperative images onto a surgical microscope ocular or a head-mounted display; such an approach is usually called augmented reality (AR). When surgery is performed in functional areas, such as the eloquent cortex, multimodal images are required. The preoperative images therefore form a complex 3D multimodal scene, which can hamper the vision of the real world when displayed in the neurosurgeon's view of the operative field. The approach introduced in this paper, called augmented virtuality (AV), instead displays the operative field view in the virtual world, i.e., in the 3D multimodal scene that includes the preoperative images of the patient. Information from the operative field consists of a 3D surface reconstructed from two stereoscopic images captured through the surgical microscope binoculars using stereovision methods. Because the microscope is part of a neuronavigation system and is tracked by an optical 3D localizer, the reconstructed 3D surface is directly expressed in the physical-space coordinate system. Using the image-to-physical-space transformation computed by the neuronavigation system, this 3D surface can also be expressed directly in the image coordinate system. In this paper, we present the method for reconstructing 3D surfaces of the operative field from stereoscopic views and matching the reconstructed surface with the preoperative images. Performance evaluation was carried out on a physical skull phantom, from which 300 image pairs were acquired.
The distance between the reconstructed surfaces and the skull surface segmented from a CT data set of the phantom was used as the accuracy measure. The method was also used in 6 clinical cases with lesions in eloquent areas. At the minimum microscope focus value, 3D reconstruction accuracy alone was within 1 mm (median: 0.76 mm ± 0.27), whereas virtual-to-real image matching accuracy was within 3 mm (median: 2.29 mm ± 0.59), including the image-to-physical-space registration error. Clinical use of this system has demonstrated the relevance of our approach. In addition to seeing beyond the surface, augmented virtuality can be used to see around the surgical area. With this system, neurosurgeons and clinical staff in the OR were able to interact with the resulting 3D scene by rotating it and adjusting transparency. This AV system facilitates understanding of the spatial relationship between the operative field and the complex 3D multimodal scene that includes the preoperative images of the patient. Index Terms—Preoperative and Intraoperative Multimodal
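The coordinate chain described in this abstract (surface reconstructed in tracker/physical space, then mapped to preoperative image space by inverting the image-to-physical registration) can be sketched with homogeneous 4×4 transforms. The function names and the use of a rigid matrix are assumptions for illustration:

```python
import numpy as np

def to_homogeneous(pts):
    # Append 1 to each 3-D point so a 4x4 transform applies by matrix product.
    return np.hstack([pts, np.ones((pts.shape[0], 1))])

def physical_to_image(surface_phys, T_image_to_physical):
    """Map a surface reconstructed in physical (localizer) space into the
    preoperative image space by inverting the image-to-physical transform
    computed by the neuronavigation system (illustrative sketch)."""
    T_phys_to_image = np.linalg.inv(T_image_to_physical)
    pts_h = to_homogeneous(surface_phys) @ T_phys_to_image.T
    return pts_h[:, :3]
```

With a pure-translation registration, a physical point at the translation offset maps back to the image-space origin, which gives a quick sanity check of the inversion direction.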
This paper gives an overview of the evolution of clinical neuroinformatics in the domain of neurosurgery. It shows how image-guided neurosurgery (IGNS) is evolving through the integration of new imaging modalities before, during, and after the surgical procedure, and how this serves as a premise for the operating room of the future. These issues, as addressed by the VisAGeS INRIA/INSERM U746 research team (http://www.irisa.fr/visages), are presented and discussed to demonstrate the benefits of integrated work between physicians (radiologists, neurologists, and neurosurgeons) and computer scientists toward a more effective use of images in IGNS.