Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain the accurate, reproducible geometric calibration required for image reconstruction from such complex orbits. This work presents a method for geometric calibration of an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information (NGI) similarity metric and the covariance matrix adaptation evolution strategy (CMA-ES) optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve upon even systems with presumably accurate geometric calibration and is applicable to situations where conventional calibration is not feasible, such as complex non-circular CBCT orbits and systems with irreproducible source-detector trajectories.
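For illustration, the gradient-based similarity at the core of the registration can be sketched as follows. This is a minimal NumPy implementation of a normalized gradient information (NGI) metric between a measured projection and a digitally reconstructed radiograph (DRR); the weighting and normalization follow a common gradient-information formulation and are assumptions here, as the exact form used in the work may differ.

```python
# Minimal sketch of a normalized gradient information (NGI) similarity metric.
import numpy as np

def gradient_information(im_a, im_b, eps=1e-8):
    """Gradient information between two 2D images (higher = more similar)."""
    gy_a, gx_a = np.gradient(im_a.astype(np.float64))
    gy_b, gx_b = np.gradient(im_b.astype(np.float64))
    mag_a = np.hypot(gx_a, gy_a)
    mag_b = np.hypot(gx_b, gy_b)
    # Weight favors pixels where the two gradients are parallel or anti-parallel.
    cos_angle = (gx_a * gx_b + gy_a * gy_b) / (mag_a * mag_b + eps)
    weight = cos_angle ** 2  # equivalent to (cos(2*angle) + 1) / 2
    return float(np.sum(weight * np.minimum(mag_a, mag_b)))

def ngi(measured, drr):
    """Normalized gradient information; equals 1.0 when the DRR matches the measurement."""
    return gradient_information(measured, drr) / gradient_information(measured, measured)
```

In the self-calibration loop, a candidate source-detector pose (e.g., six degrees of freedom per view) would be used to forward-project the prior 3D image into a DRR, and a CMA-ES optimizer (e.g., the `cma` Python package) would adjust the pose to maximize the NGI for each projection view; the converged poses then define the projection geometry used for reconstruction.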
This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques that optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters and in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search over tilt angles within ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e., the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner beneficial to task performance. Similarly, for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields a modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the tilt angle that reduced quantum noise in the region of the stimulus by avoiding highly attenuating anatomical structures. The task-driven imaging framework offers a potentially valuable paradigm for prospective definition of acquisition and reconstruction protocols that improve task performance without an increase in dose.
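As a rough illustration of the alternating optimization at a single tilt angle, the sketch below uses a toy surrogate for d' (per-view noise scaling as attenuation divided by mA, with the kernel acting as a per-view noise/resolution trade-off) rather than the cascaded-systems model described in the work; the variable names, kernel parameterization, and surrogate itself are assumptions. The full method would wrap this loop in an exhaustive search over tilt angles within ±30°.

```python
# Toy alternating optimization of view-dependent mA and kernel at fixed total mAs.
import numpy as np

n_views = 180
# Toy per-view path-length (attenuation) proxy: lateral views "see" more tissue.
attenuation = 1.0 + 0.8 * np.abs(np.sin(np.linspace(0.0, np.pi, n_views)))
kernel_choices = np.array([0.6, 1.0, 1.4])   # toy kernel "sharpness" levels
total_mAs = 180.0

def surrogate_dprime(ma, kernel):
    noise = np.sum(kernel ** 2 * attenuation / np.maximum(ma, 1e-6))  # toy noise term
    resolution = np.mean(kernel)                                      # toy resolution term
    return resolution / np.sqrt(noise)

ma = np.full(n_views, total_mAs / n_views)          # start unmodulated
kernel = np.full(n_views, kernel_choices[1])
for _ in range(10):
    # (1) mA update at fixed kernels: minimum-variance allocation under fixed total mAs.
    weight = np.sqrt(kernel ** 2 * attenuation)
    ma = total_mAs * weight / weight.sum()
    # (2) kernel update at fixed mA: greedy per-view choice maximizing the surrogate d'.
    for v in range(n_views):
        scores = [surrogate_dprime(ma, np.where(np.arange(n_views) == v, k, kernel))
                  for k in kernel_choices]
        kernel[v] = kernel_choices[int(np.argmax(scores))]
print("surrogate d' after alternating optimization:", surrogate_dprime(ma, kernel))
```

Even in this toy form, the alternation shows the qualitative behavior described above, with noisier (more attenuating) views receiving smoother kernels while the fixed total mAs is redistributed across views; the profiles found by the full task-based model depend on the imaging task and anatomy.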
We apply the methodology detailed in "Task-driven source-detector trajectories in cone-beam computed tomography: I. Theory and methods" by Stayman et al. for task-driven optimization of source-detector orbits in cone-beam computed tomography (CBCT) to scenarios emulating imaging tasks in interventional neuroradiology. The task-driven imaging framework is used to optimize the CBCT source-detector trajectory by maximizing the detectability index, d′. The approach was applied to simulated cases of endovascular embolization of an aneurysm and an arteriovenous malformation and was translated to real data, first using a CBCT test bench and then an interventional robotic C-arm. Task-driven trajectories were found to generally favor higher-fidelity (i.e., less noisy) views, with an average increase in d′ ranging from 7% to 28%. Visually, this resulted in improved conspicuity of particular stimuli by reducing the noise and altering the noise correlation to a form distinct from the spatial frequencies associated with the imaging task. The improvements in detectability and the demonstration of the task-driven workflow using a real interventional imaging system show the potential of the task-driven imaging framework to improve imaging performance on motorized, multi-axis C-arms in neuroradiology.
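A highly simplified sketch of designing a trajectory from view-level fidelity is shown below: for each gantry rotation angle, the tilt that minimizes a toy attenuation proxy (central-ray chord length through an ellipsoidal "patient") is selected, mirroring the tendency of task-driven trajectories to favor less noisy views. The ellipsoid, candidate grid, and fidelity score are assumptions for illustration only; the actual method maximizes the task-based detectability index d′ computed from a patient-specific model.

```python
# Toy trajectory design: per rotation angle, pick the tilt with the shortest path.
import numpy as np

def chord_length(theta, phi, a=200.0, b=150.0, c=120.0):
    """Central-ray chord length (mm) through an ellipsoid with semi-axes (a, b, c)
    for gantry rotation theta and detector tilt phi (radians)."""
    d = np.array([np.cos(phi) * np.cos(theta),
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi)])
    return 2.0 / np.sqrt((d[0] / a) ** 2 + (d[1] / b) ** 2 + (d[2] / c) ** 2)

rotations = np.deg2rad(np.arange(0.0, 360.0, 2.0))
tilts = np.deg2rad(np.arange(-30.0, 31.0, 5.0))
trajectory = []
for theta in rotations:
    # Toy per-view fidelity: shorter path -> less attenuation -> less noisy view.
    fidelity = [1.0 / chord_length(theta, phi) for phi in tilts]
    trajectory.append((np.rad2deg(theta), np.rad2deg(tilts[int(np.argmax(fidelity))])))
```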
Image-guided therapies in the abdomen and pelvis are often hindered by motion artifacts in cone-beam CT (CBCT) arising from complex, non-periodic, deformable organ motion during long scan times (5–30 s). We propose a deformable image-based motion compensation method to address these challenges and improve CBCT guidance. Motion compensation is achieved by selecting a set of small regions of interest in the uncompensated image to minimize a cost function consisting of an autofocus objective and spatiotemporal regularization penalties. Motion trajectories are estimated using an iterative optimization algorithm (CMA-ES) and used to interpolate a 4D spatiotemporal motion vector field. The motion-compensated image is reconstructed using a modified filtered backprojection approach. Being image-based, the method does not require additional input besides the raw CBCT projection data and system geometry that are used for image reconstruction. Experimental studies investigated: (1) various autofocus objective functions, analyzed using a digital phantom with a range of sinusoidal motion magnitude (4, 8, 12, 16, 20 mm); (2) spatiotemporal regularization, studied using a CT dataset from The Cancer Imaging Archive with deformable sinusoidal motion of variable magnitude (10, 15, 20, 25 mm); and (3) performance in complex anatomy, evaluated in cadavers undergoing simple and complex motion imaged on a CBCT-capable mobile C-arm system (Cios Spin 3D, Siemens Healthineers, Forchheim, Germany). Gradient entropy was found to be the best autofocus objective for soft-tissue CBCT, increasing structural similarity (SSIM) by 42%–92% over the range of motion magnitudes investigated. The optimal temporal regularization strength was found to vary widely (0.5–5 mm⁻²) over the range of motion magnitudes investigated, whereas optimal spatial regularization strength was relatively constant (0.1). In cadaver studies, deformable motion compensation was shown to improve local SSIM by ∼17% for simple motion and ∼21% for complex motion and provided strong visual improvement of motion artifacts (reduction of blurring and streaks and improved visibility of soft-tissue edges). The studies demonstrate the robustness of deformable motion compensation to a range of motion magnitudes, frequencies, and other factors (e.g. truncation and scatter).
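A minimal sketch of the autofocus cost is given below, assuming a gradient-entropy sharpness term computed on a candidate motion-compensated volume plus simple quadratic temporal and spatial smoothness penalties on the per-ROI motion trajectories. The penalty forms, weights, and array layout are assumptions; the objective in the work may differ in detail, and in practice this cost would be evaluated inside the CMA-ES search over trajectory parameters.

```python
# Sketch of a gradient-entropy autofocus objective with spatiotemporal regularization.
import numpy as np

def gradient_entropy(vol, eps=1e-12):
    """Entropy of the normalized gradient-magnitude distribution (lower = sharper)."""
    grads = np.gradient(vol.astype(np.float64))
    mag = np.sqrt(sum(g ** 2 for g in grads))
    p = mag / (mag.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))

def autofocus_cost(vol, trajectories, lam_t=1.0, lam_s=0.1):
    """vol: candidate motion-compensated reconstruction (3D array).
    trajectories: (n_rois, n_time, 3) candidate motion vectors (mm) per ROI."""
    temporal = np.sum(np.diff(trajectories, axis=1) ** 2)               # smooth in time
    spatial = np.sum((trajectories - trajectories.mean(axis=0)) ** 2)   # ROIs move alike
    return gradient_entropy(vol) + lam_t * temporal + lam_s * spatial
```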
Purpose: Patient motion artifacts present a prevalent challenge to image quality in interventional cone-beam CT (CBCT). We propose a novel reference-free similarity metric (DL-VIF) that leverages the capability of deep convolutional neural networks (CNNs) to learn features associated with motion artifacts within realistic anatomical features. DL-VIF aims to address shortcomings of conventional metrics of motion-induced image quality degradation, which favor characteristics associated with motion-free images, such as sharpness or piecewise constancy, but lack any awareness of the underlying anatomy, potentially promoting images depicting unrealistic image content. DL-VIF was integrated into an autofocus motion compensation framework to test its performance for motion estimation in interventional CBCT. Methods: DL-VIF is a reference-free surrogate for the previously reported visual information fidelity (VIF) metric, which is computed against a motion-free reference; the surrogate is generated using a CNN trained on simulated motion-corrupted and motion-free CBCT data. Relatively shallow (2-ResBlock) and deep (3-ResBlock) CNN architectures were trained and tested to assess sensitivity to motion artifacts and generalizability to unseen anatomy and motion patterns. DL-VIF was integrated into an autofocus framework for rigid motion compensation in head/brain CBCT and assessed in simulation and cadaver studies in comparison to a conventional gradient entropy metric. Results: The 2-ResBlock architecture better reflected motion severity and extrapolated to unseen data, whereas the 3-ResBlock architecture was found more susceptible to overfitting, limiting its generalizability to unseen scenarios. DL-VIF outperformed gradient entropy in simulation studies, yielding average multi-resolution structural similarity index (SSIM) improvements over the uncompensated image of 0.054 and 0.029, respectively, referenced to motion-free images. DL-VIF was also more robust in motion compensation, evidenced by reduced variance in SSIM across motion patterns (σ = 0.003 for DL-VIF vs. σ = 0.016 for gradient entropy). Similarly, in cadaver studies, DL-VIF demonstrated superior motion compensation compared to gradient entropy (an average SSIM improvement of 0.043 (5%) vs. little improvement or even degradation in SSIM, respectively) and visually improved image quality even in severely motion-corrupted images. Conclusion: These studies demonstrate the feasibility of building reference-free similarity metrics for quantification of motion-induced image quality degradation and distortion of anatomical structures in CBCT. DL-VIF provides a reliable surrogate for motion severity, penalizes unrealistic distortions, and presents a valuable new objective function for autofocus motion compensation in CBCT.
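For readers interested in the flavor of the network, the following PyTorch sketch shows a shallow residual CNN that maps an image patch to a scalar quality score, in the spirit of the 2-ResBlock DL-VIF model. Channel counts, 2D input, pooling, and the training target indicated in the closing comment are assumptions for illustration, not the published architecture.

```python
# Sketch of a shallow residual CNN regressing a scalar, reference-free quality score.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class DLVIFNet(nn.Module):
    """Maps a single-channel image to a scalar surrogate of visual information fidelity."""
    def __init__(self, ch=32, n_blocks=2):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, 1))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# Conceptual training target: regress the reference-based VIF computed against the
# motion-free reconstruction of the same simulated scan, e.g.
#   loss = nn.functional.mse_loss(DLVIFNet()(corrupted), vif_reference)
```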