Surgical targeting of the incorrect vertebral level (“wrong-level” surgery) is among the more common wrong-site surgical errors, attributed primarily to a lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy; it is prone to human error and carries additional time and radiation dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery that automatically labels vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (viz., CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and the CMA-ES optimizer were chosen for their robustness and inherent suitability for parallelization. Simulation studies involved 10 patient CT datasets from which 50,000 simulated fluoroscopic images were generated from C-arm poses selected to approximate C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. Registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true centers of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (viz., mPD < 5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50,000 trials) and a computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure.
The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
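The gradient information (GI) metric mentioned above rewards locations where image gradients are strong in both images and parallel (or anti-parallel). The sketch below is a minimal NumPy illustration of a GI-style similarity for 2-D images; the function name and the exact weighting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gradient_information(fixed, moving, eps=1e-8):
    """GI-style similarity between two 2-D images (illustrative sketch).

    Gradients that are strong in both images and aligned (or anti-aligned)
    contribute most; orthogonal or weak gradients contribute little.
    """
    gx_f, gy_f = np.gradient(fixed.astype(float))
    gx_m, gy_m = np.gradient(moving.astype(float))
    mag_f = np.hypot(gx_f, gy_f)
    mag_m = np.hypot(gx_m, gy_m)
    # Cosine of the angle between the gradient vectors at each pixel.
    cos_a = (gx_f * gx_m + gy_f * gy_m) / (mag_f * mag_m + eps)
    # Weight is 1 for parallel/anti-parallel gradients, 0 for orthogonal;
    # cos^2(a) equals (cos(2a) + 1) / 2.
    w = cos_a ** 2
    # Scale by the weaker of the two gradient magnitudes.
    return float(np.sum(w * np.minimum(mag_f, mag_m)))
```

In a registration loop, a pose optimizer (e.g., CMA-ES) would maximize this score between the measured fluoroscopic image and a digitally reconstructed radiograph projected from CT at each candidate pose.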
The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.
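The alternating-maximization idea can be illustrated on a deliberately tiny surrogate problem: a 1-D signal containing an unknown background level ("anatomy") plus a known component template at an unknown shift ("pose"), fitted under a Gaussian (least-squares) likelihood. This is a hypothetical toy, not the paper's reconstruction algorithm, but it shows the two-block update structure.

```python
import numpy as np

def alternating_estimate(y, template, n_iter=20):
    """Toy alternating maximization: jointly estimate a scalar background
    level and the integer shift ("pose") of a known component template
    within a 1-D signal. Illustrative simplification only.
    """
    n, m = len(y), len(template)
    bg, shift = 0.0, 0
    for _ in range(n_iter):
        # Pose step: best shift of the known component given the background.
        resid = y - bg
        scores = [np.sum((resid[s:s + m] - template) ** 2)
                  for s in range(n - m + 1)]
        shift = int(np.argmin(scores))
        # Anatomy step: background that maximizes the likelihood given pose.
        model = np.zeros(n)
        model[shift:shift + m] = template
        bg = float(np.mean(y - model))
    return bg, shift
```

In the full method, the "anatomy step" would be an iterative penalized-likelihood image update and the "pose step" a 6-degree-of-freedom rigid registration of each component model, but the alternation pattern is the same.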
We develop a mathematical framework for the design of orbital trajectories that are optimal for a particular imaging task (or tasks) in advanced cone-beam computed tomography systems capable of general source-detector positioning. The framework allows various parameterizations of the orbit as well as constraints based on imaging system capabilities. To accommodate nonstandard system geometries, a model-based iterative reconstruction method is applied. Such algorithms generally complicate the assessment and prediction of reconstructed image properties; however, we leverage efficient implementations of analytical predictors of local noise and spatial resolution that incorporate dependencies of the reconstruction algorithm on patient anatomy, x-ray technique, and geometry. These image property predictors serve as inputs to a task-based performance metric defined by detectability index, which is optimized with respect to the orbital parameters of data acquisition. We investigate the framework of task-driven trajectory design in several examples to examine the dependence of optimal source-detector trajectories on the imaging task (or tasks), including location and spatial-frequency dependence. A variety of multitask objectives are also investigated, and the advantages to imaging performance are quantified in simulation studies.
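A common form of the detectability index combines the system's modulation transfer function (MTF), noise-power spectrum (NPS), and a task function sampled on a shared frequency grid. The sketch below shows the standard non-prewhitening (NPW) observer form as a minimal discrete sum; the function and array names are assumptions for illustration, not the paper's code.

```python
import numpy as np

def detectability_npw(mtf, nps, w_task):
    """Non-prewhitening observer detectability index d'.

    mtf, nps, w_task: arrays sampled on the same spatial-frequency grid.
    d'^2 = [sum (MTF * W_task)^2]^2 / sum [(MTF * W_task)^2 * NPS]
    """
    signal = (mtf * w_task) ** 2
    num = np.sum(signal) ** 2
    den = np.sum(signal * nps)
    return float(np.sqrt(num / den))
```

In a task-driven design loop, the orbit parameters would be varied, the local MTF and NPS predicted for each candidate orbit, and d' maximized over those parameters.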
Purpose: Phantoms are a basic tool for assessing and verifying performance in CT research and clinical practice. Patient-based realistic lung phantoms accurately representing textures and densities are essential in developing and evaluating novel CT hardware and software. This study introduces PixelPrint, a 3D printing solution to create patient-based lung phantoms with accurate attenuation profiles and textures. Methods: PixelPrint, a software tool, was developed to convert patient digital imaging and communications in medicine (DICOM) images directly into FDM printer instructions (G-code). Density was modeled as the ratio of filament to voxel volume to emulate attenuation profiles for each voxel, with the filament ratio controlled through continuous modification of the printing speed. A calibration phantom was designed to determine the mapping between filament line width and Hounsfield units (HU) within the range of human lungs. For evaluation of PixelPrint, a phantom based on a single human lung slice was manufactured and scanned with the same CT scanner and protocol used for the patient scan. Density and geometrical accuracy between phantom and patient CT data were evaluated for various anatomical features in the lung. Results: For the calibration phantom, measured mean HU showed a very high degree of linear correlation with the utilized filament line widths (r > 0.999). Qualitatively, the CT image of the patient-based phantom closely resembles the original CT image both in texture and contrast levels (from −800 to 0 HU), with clearly visible vascular and parenchymal structures. Region-of-interest comparisons of attenuation showed differences below 15 HU. Manual size measurements performed by an experienced thoracic radiologist reveal a high degree of geometrical correlation of details between identical patient and phantom features, with differences smaller than the intrinsic spatial resolution of the scans.
Conclusion: The present study demonstrates the feasibility of 3D-printed patient-based lung phantoms with accurate organ geometry, image texture, and attenuation profiles. PixelPrint will enable applications in the research and development of CT technology, including further development in radiomics.
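The calibration described in the Methods amounts to fitting a linear mapping between printed filament line width and measured mean HU, then inverting it to choose a line width for each target HU. A minimal sketch of that fit-and-invert step is below; the function name and the example coefficients are illustrative assumptions, not PixelPrint's actual calibration values.

```python
import numpy as np

def fit_hu_to_linewidth(widths_mm, measured_hu):
    """Fit an (assumed linear) calibration between printed filament line
    width and measured mean HU, and return a function mapping a target HU
    to the line width to print. Illustrative sketch only.
    """
    a, b = np.polyfit(widths_mm, measured_hu, 1)  # HU ~= a * width + b
    def width_for_hu(target_hu):
        # Invert the linear model to get the required line width.
        return (np.asarray(target_hu) - b) / a
    return width_for_hu
```

Given the strong linearity reported (r > 0.999), a first-order fit like this is sufficient over the lung HU range; the printer would then realize each width by modulating printing speed.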
Adaptation of the Demons deformable registration process to include segmentation (i.e., identification of excised tissue) and an extra dimension in the deformation field provided a means to accurately accommodate missing tissue between image acquisitions. The extra-dimensional approach yielded accurate "ejection" of voxels local to the excision site while preserving the registration accuracy (typically subvoxel) of the conventional Demons approach throughout the rest of the image. The ability to accommodate missing tissue volumes is important to application of CBCT for surgical guidance (e.g., skull base drillout) and may have application in other areas of CBCT guidance.
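For context, the classic Demons update that the extra-dimensional approach builds on computes a displacement increment from the intensity difference and the fixed-image gradient at each voxel. The 2-D sketch below shows one such update step; it is a minimal illustration and omits the segmentation and extra-dimensional "ejection" machinery described above.

```python
import numpy as np

def demons_step(fixed, moving_warped, eps=1e-8):
    """One classic Demons displacement update (2-D sketch).

    Returns per-pixel displacement increments (ux, uy) driven by the
    intensity difference and the fixed-image gradient; in practice the
    accumulated field is smoothed (e.g., Gaussian) between iterations.
    """
    gx, gy = np.gradient(fixed.astype(float))
    diff = moving_warped.astype(float) - fixed.astype(float)
    # Standard Demons denominator regularizes small-gradient regions.
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    ux = diff * gx / denom
    uy = diff * gy / denom
    return ux, uy
```

The extra-dimensional variant augments this deformation field with an additional component that lets voxels at the excision site map "out of" the image domain, which is what accommodates the missing tissue.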