We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy (CMA-ES) optimization with local restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on each of the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of the input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with "success" defined as PDE < 5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (NVIDIA GeForce GTX 690).
Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy and interventional radiology, and as an aid to target localization (e.g., vertebral labeling) in image-guided spine surgery.
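The gradient information similarity metric mentioned above rewards image pairs whose gradients are strong in both images and similarly oriented, making it less sensitive to content mismatch than raw intensity differences. Below is a minimal NumPy sketch of a gradient-information-style metric in the spirit of the formulation by Pluim et al.; the function name and the squared-cosine weighting form are illustrative, not the authors' exact implementation.

```python
import numpy as np

def gradient_information(fixed, moving, eps=1e-8):
    """Gradient-information-style similarity between two 2D images.

    Each pixel contributes min(|grad_fixed|, |grad_moving|) weighted by
    how well the two gradient directions agree; w = cos^2(angle) equals
    (cos(2*angle) + 1) / 2, so parallel and anti-parallel gradients
    (e.g., inverted contrast) both score highly.
    """
    gx_f, gy_f = np.gradient(fixed.astype(float))
    gx_m, gy_m = np.gradient(moving.astype(float))
    mag_f = np.hypot(gx_f, gy_f)
    mag_m = np.hypot(gx_m, gy_m)
    # Cosine of the angle between gradient vectors at each pixel.
    cos_a = (gx_f * gx_m + gy_f * gy_m) / (mag_f * mag_m + eps)
    w = cos_a ** 2
    return float(np.sum(w * np.minimum(mag_f, mag_m)))
```

A correctly aligned pair scores at least as high as a misaligned one, which is what the optimizer exploits when searching over pose parameters.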
Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct-contact methods and are used in some commercial image-guided radiation therapy systems with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager, combining fiducial-based C-arm tracking and graphics processing unit (GPU) acceleration. The two-stage framework 1) acquires X-ray images and estimates the relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validation was conducted using a publicly available gold-standard dataset, a plastic bone phantom, and cadaveric specimens. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.
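The mTRE figure of merit above measures how far a set of anatomical target points moves when the estimated rigid transform is applied instead of the ground-truth one. A minimal sketch, assuming 4×4 homogeneous transform matrices and an N×3 array of target points (names are illustrative):

```python
import numpy as np

def mean_tre(targets, T_est, T_true):
    """Mean target registration error (mTRE).

    targets : (N, 3) array of 3D target points in the volume frame.
    T_est, T_true : (4, 4) homogeneous rigid transforms (estimated and
    ground truth). Returns the mean Euclidean distance between the
    points mapped by each transform.
    """
    pts_h = np.c_[targets, np.ones(len(targets))]  # homogeneous coords
    p_est = (T_est @ pts_h.T).T[:, :3]
    p_true = (T_true @ pts_h.T).T[:, :3]
    return float(np.mean(np.linalg.norm(p_est - p_true, axis=1)))
```

With identical transforms the error is zero; a pure 1 mm translation offset yields an mTRE of exactly 1 mm for any target set.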
Surgical targeting of the incorrect vertebral level ("wrong-level" surgery) is among the more common wrong-site surgical errors, attributed primarily to a lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method, manual counting of vertebral bodies under fluoroscopy, is prone to human error and adds time and radiation dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (viz., CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and CMA-ES optimizer were chosen for their robustness and inherent suitability for parallelization. Simulation studies involved 10 patient CT datasets from which 50,000 simulated fluoroscopic images were generated from C-arm poses selected to approximate C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. Registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true centers of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (viz., mPD < 5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50,000 trials) and a computation time of 4.7 s on a mid-range GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated robustness of the algorithm against quantum noise and X-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure.
The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
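The failure mode attributed to longitudinal periodicity can be illustrated with a toy objective: regularly spaced local minima (analogous to repeating vertebrae along the spine) trap a single local search, while a multi-start strategy recovers the global basin. The sketch below uses a crude coordinate descent as a simplified stand-in for the multi-start CMA-ES used in the paper; all names and parameters are illustrative.

```python
import numpy as np

def periodic_cost(x, spacing=30.0, depth=0.8):
    """Toy 1-D cost mimicking vertebral periodicity: a broad global
    basin at x = 0 plus local minima roughly every `spacing` mm."""
    return (x / 100.0) ** 2 - depth * np.cos(2 * np.pi * x / spacing)

def local_descent(x0, step=1.0, iters=200):
    """Crude local search: take a fixed-size step whenever it lowers
    the cost. Gets stuck in whichever periodic basin it starts in."""
    x = x0
    for _ in range(iters):
        for dx in (step, -step):
            if periodic_cost(x + dx) < periodic_cost(x):
                x += dx
                break
    return x

def multi_start(starts):
    """Run a local search from each start and keep the best result."""
    results = [local_descent(s) for s in starts]
    return min(results, key=periodic_cost)
```

A single search started 70 mm off lands in a nearby periodic minimum (one vertebral level wrong), whereas starts spread across the ±100 mm capture range let the best run settle in the true basin. This is the same robustness-versus-computation tradeoff quantified in the cadaver experiments: more starts cost more function evaluations but fail less often.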
This work assesses the potential of statistical image reconstruction methods such as penalized-likelihood (PL) to improve soft-tissue visualization in C-arm cone-beam CT (CBCT) for intraoperative imaging, making a fair comparison against conventional filtered backprojection (FBP) with respect to soft-tissue performance. A prototype mobile C-arm was used to scan anthropomorphic head and abdomen phantoms as well as a cadaveric torso at doses substantially lower than typical values in diagnostic CT, and the effects of dose reduction via tube-current reduction and sparse sampling were also compared. Matched spatial resolution between PL and FBP was determined by the edge spread function of low-contrast (~40–80 HU) spheres in the phantoms, which were representative of soft-tissue imaging tasks. PL using the non-quadratic Huber penalty was found to substantially reduce noise relative to FBP, especially at lower spatial resolution, where PL provided a contrast-to-noise ratio increase of up to 1.4–2.2× over FBP at 50% dose reduction across all objects. Comparison of sampling strategies indicated that soft-tissue imaging benefits from fully sampled acquisitions at doses above ~1.7 mGy and from 50% sparsity at doses below ~1.0 mGy. An appropriate sampling strategy, combined with the improved low-contrast visualization offered by statistical reconstruction, therefore demonstrates the potential to extend intraoperative C-arm CBCT to soft-tissue interventions in neurosurgery as well as thoracic and abdominal surgery, overcoming conventional tradeoffs in noise, spatial resolution, and dose.
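The Huber penalty named above is the key to the noise-versus-edge tradeoff: it is quadratic for small neighboring-voxel differences (smoothing noise aggressively) and linear for large ones (penalizing edges only mildly, preserving soft-tissue boundaries). A minimal sketch of the penalty function itself, with an illustrative threshold delta:

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber penalty: 0.5 * t^2 for |t| <= delta, and
    delta * (|t| - 0.5 * delta) beyond, so the two pieces join
    with matching value and slope at |t| = delta.

    In PL reconstruction, t is the difference between neighboring
    voxels; small differences (noise) incur a quadratic cost while
    large differences (true edges) incur only a linear cost.
    """
    t = np.asarray(t, dtype=float)
    quad = 0.5 * t ** 2
    lin = delta * (np.abs(t) - 0.5 * delta)
    return np.where(np.abs(t) <= delta, quad, lin)
```

In the full objective, this penalty is summed over voxel-neighbor pairs and weighted against the data-likelihood term; the weight and delta together control the resolution-noise operating point at which PL was matched to FBP.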