Abstract-This paper presents a new method for image-guided neurosurgery that displays, within the same 3D scene, multimodal preoperative images of a patient and images of the operative field viewed through the surgical microscope binoculars. Matching real-world information, i.e., the operative field, with virtual-world information, i.e., preoperative images of the patient, is an important issue in image-guided neurosurgery. This can be achieved by superimposing preoperative images onto a surgical microscope ocular or a head-mounted display, an approach usually called augmented reality (AR). When surgery is performed in functional areas, such as the eloquent cortex, multimodal images are required. The preoperative images therefore form a complex 3D multimodal scene, which can hamper the view of the real world when displayed in the neurosurgeon's view of the operative field. The approach introduced in this paper, called augmented virtuality (AV), instead displays the operative field view in the virtual world, i.e., in the 3D multimodal scene that includes the preoperative images of the patient. Information from the operative field consists of a 3D surface reconstructed from two stereoscopic images acquired through the surgical microscope binoculars using stereovision methods. Because the microscope is part of a neuronavigation system and is tracked by an optical 3D localizer, the reconstructed 3D surface is directly expressed in the physical space coordinate system. Using the image-to-physical space transformation computed by the neuronavigation system, this 3D surface can also be directly expressed in the image coordinate system. In this paper, we present the method for reconstructing 3D surfaces of the operative field from stereoscopic views and for matching the reconstructed surface with the preoperative images. The performance of this method was evaluated using a physical skull phantom, for which 300 image pairs were acquired. The distance between the reconstructed surfaces and the skull surface segmented from a CT data set of this phantom was used as a measure of system accuracy. Our method was used in 6 clinical cases with lesions in eloquent areas. For the minimum microscope focus value, the 3D reconstruction accuracy alone was within 1 mm (median: 0.76 mm ± 0.27), whereas the accuracy of matching virtual and real images was within 3 mm (median: 2.29 mm ± 0.59), including the image-to-physical space registration error. Clinical use of this system has demonstrated the relevance of our approach. In addition to seeing beyond the surface, augmented virtuality can be used to see around the surgical area. With this system, neurosurgeons and clinical staff in the OR were able to interact with the resulting 3D scene by rotating it and modifying transparency features. This AV system facilitates understanding of the spatial relationship between the operative field and the complex 3D multimodal scene that includes the preoperative images of the patient.
Index Terms-Preoperative and Intraoperative Multimodal
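The abstract describes two operations that can be sketched concretely: expressing the reconstructed surface in the image coordinate system through the image-to-physical registration, and using the distance between the reconstructed surface and a CT-segmented reference surface as the accuracy measure. The following Python sketch is an illustration of those two steps only, not the paper's implementation; the function names, the hypothetical 4x4 rigid transform T_image_from_phys, the point-sampled surfaces, and the nearest-neighbor distance metric are all assumptions introduced here.

    import numpy as np
    from scipy.spatial import cKDTree

    def to_image_space(points_phys, T_image_from_phys):
        # Map Nx3 points from physical (tracker) space into the preoperative
        # image space with a 4x4 rigid transform, such as the one derived from
        # the neuronavigation system's image-to-physical registration.
        pts_h = np.hstack([points_phys, np.ones((points_phys.shape[0], 1))])
        return (T_image_from_phys @ pts_h.T).T[:, :3]

    def surface_distance_stats(reconstructed_pts, reference_pts):
        # Distance from each reconstructed vertex to its nearest point on the
        # reference surface (e.g., the skull surface segmented from the phantom
        # CT); summarized as a median and standard deviation.
        tree = cKDTree(reference_pts)
        d, _ = tree.query(reconstructed_pts)
        return np.median(d), d.std()

    # Illustrative usage with assumed variables:
    #   recon_phys   - Nx3 vertices of the stereoscopically reconstructed surface
    #   ct_skull_pts - Mx3 points sampled from the CT-segmented skull surface
    # recon_img = to_image_space(recon_phys, T_image_from_phys)
    # median_mm, std_mm = surface_distance_stats(recon_img, ct_skull_pts)

Under these assumptions, the reported figures (e.g., a median of 0.76 mm ± 0.27 for reconstruction alone) would correspond to the median and spread of such point-to-surface distances, with the image-to-physical registration error additionally included when the comparison is done in image space.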