Physicians are accustomed to using volumetric datasets for medical assessment, diagnosis, and treatment. These datasets can be displayed with 3D computer visualizations that let physicians study overall shape as well as internal anatomical structures. Gesture-based interfaces can be beneficial for interacting with these kinds of visualizations in a variety of medical settings. We conducted two user studies that explore different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, where the performance of the gesture-based interface (using the Microsoft Kinect) was compared to that of the mouse. The second experiment studied localization of internal structures, comparing slice-based visualizations controlled by gestures and by the mouse, as well as a gesture-based 3D Magic Lens visualization. The results showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy on the orientation-matching task. The traditional mouse was the more accurate interface in the second experiment; however, the gesture-based Magic Lens yielded the fastest target-localization time. We discuss these findings and their implications for the use of gesture-based interfaces in medical volume visualization.
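As context for the rotation task, the sketch below shows one common way to turn tracked hand motion into an incremental rotation of a rendered volume. It is a minimal illustration assuming a skeletal tracker (such as the Kinect) that reports 3D hand positions; the frame conventions and function names are ours, not the study's actual implementation.

```python
# Minimal sketch: map successive tracked hand directions to an
# incremental rotation of the volume via Rodrigues' formula.
# Assumes a tracker (e.g., Kinect) reporting 3D hand positions;
# illustrative only, not the study's implementation.
import numpy as np

def rotation_between(v0, v1):
    """Rotation matrix taking direction v0 to direction v1."""
    a = v0 / np.linalg.norm(v0)
    b = v1 / np.linalg.norm(v1)
    v = np.cross(a, b)                 # rotation axis (unnormalized)
    c = float(np.dot(a, b))            # cosine of rotation angle
    if np.isclose(c, -1.0):            # antiparallel: 180-degree turn
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

# Per frame: accumulate the incremental hand rotation into the
# volume's orientation (hand directions here are placeholder values).
orientation = np.eye(3)
prev_dir = np.array([0.0, 0.0, 1.0])
curr_dir = np.array([0.1, 0.05, 1.0])
orientation = rotation_between(prev_dir, curr_dir) @ orientation
```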
Many real-time ultrasound (US)-guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information into the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the "hand-eye" calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross-correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root-mean-square error, a significant improvement over previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795).
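For context, hand-eye calibration is conventionally posed as solving A_i X = X B_i for the fixed sensor-to-image transform X, where A_i are tracked sensor motions and B_i are the corresponding image-registration motions. The sketch below is the classical closed-form least-squares solution in the style of Park and Martin, offered as an illustration of the underlying formulation rather than the authors' automated, registration-driven pipeline.

```python
# Minimal sketch of the classical AX = XB hand-eye solution
# (Park-Martin style). A_list/B_list hold paired 4x4 homogeneous
# motions; this illustrates the formulation, not the paper's pipeline.
import numpy as np
from scipy.linalg import sqrtm
from scipy.spatial.transform import Rotation

def hand_eye(A_list, B_list):
    """Solve A_i X = X B_i for the 4x4 calibration transform X."""
    # Rotation part: R_X = (M^T M)^(-1/2) M^T with M = sum(beta alpha^T),
    # where alpha/beta are rotation-vector (log-map) forms of R_A/R_B.
    M = np.zeros((3, 3))
    for A, B in zip(A_list, B_list):
        alpha = Rotation.from_matrix(A[:3, :3]).as_rotvec()
        beta = Rotation.from_matrix(B[:3, :3]).as_rotvec()
        M += np.outer(beta, alpha)
    Rx = np.real(np.linalg.inv(sqrtm(M.T @ M)) @ M.T)
    # Translation part: stack (R_A - I) t_X = R_X t_B - t_A over all
    # motion pairs and solve in the least-squares sense.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3]
                        for A, B in zip(A_list, B_list)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```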
Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic video), and vector-valued datasets (such as computer simulations). Presenting all of this information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from these various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analyses of dataset values in local regions, combine 2D and 3D imaging modalities, and display the results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces such as the mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example medical application (medialization laryngoplasty) demonstrates how the combination of different modalities can be used in a surgical setting with our approach.
We propose a novel method for the registration of 3D CT scans to 2D endoscopic images during image-guided medialization laryngoplasty. This study aims to allow the surgeon to find the precise configuration of the implant and place it in the desired location by employing accurate registration of the 3D CT data to the intra-operative patient, together with interactive visualization tools for the registered images. The proposed registration methods enable the surgeon to compare the outcome of the procedure to the pre-planned shape by matching the vocal folds in the CT-rendered images to those in the endoscopic images. The 3D image fusion provides interactive and intuitive guidance for the surgeon by visualizing the combined and correlated relationship of the multiple imaging modalities. The 3D Magic Lens helps to effectively visualize laryngeal anatomical structures by applying different transparencies and transfer functions to a region of interest. Preliminary results demonstrate that the proposed method can be readily extended to image-guided surgery on real patients.
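To make the Magic Lens idea concrete, the sketch below applies a different transfer function inside a circular lens region of a 2D CT slice. The lens geometry and the window/level transfer functions are assumptions made for illustration; the paper's actual implementation operates on 3D renderings.

```python
# Minimal 2D illustration of a Magic Lens: pixels inside a circular
# lens get a different transfer function (here, a bone window at full
# opacity) than the surrounding context. Illustrative assumptions only.
import numpy as np

def apply_magic_lens(slice_hu, center, radius, tf_context, tf_focus):
    """Map a CT slice (Hounsfield units) to RGBA, switching transfer
    functions inside the lens region."""
    h, w = slice_hu.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    rgba = tf_context(slice_hu)                  # context rendering
    rgba[inside] = tf_focus(slice_hu)[inside]    # lens rendering
    return rgba

def window_level(level, width, opacity):
    """Simple window/level transfer function returning grayscale RGBA."""
    def tf(hu):
        g = np.clip((hu - (level - width / 2.0)) / width, 0.0, 1.0)
        return np.stack([g, g, g, np.full_like(g, opacity)], axis=-1)
    return tf

# Example: dim soft-tissue context, bone-window focus inside the lens.
# rgba = apply_magic_lens(ct_slice, center=(128, 128), radius=40,
#                         tf_context=window_level(40, 400, 0.4),
#                         tf_focus=window_level(300, 1500, 1.0))
```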