7th International Conference on Image Formation in X-Ray Computed Tomography 2022
DOI: 10.1117/12.2646857

Context-aware, reference-free local motion metric for CBCT deformable motion compensation

Abstract: Deformable motion is one of the main challenges to image quality in interventional cone-beam CT (CBCT). Autofocus methods have been successfully applied for deformable motion compensation in CBCT, using multi-region joint optimization approaches that leverage the moderately smooth spatial variation of the deformable motion field within a local neighborhood. However, conventional autofocus metrics enforce sharp image appearance but do not guarantee the preservation of anatomical structure…

Cited by 4 publications (13 citation statements) | References 10 publications
“…The deformable autofocus algorithm used in this work, illustrated in Figure 3, was based on previous work on deformable motion estimation with assumptions of local rigidity within small ROIs placed throughout the FOV [11,47,48]. The autofocus motion estimation method acts on a set of N_R ROIs of arbitrary size, positioned at arbitrary locations r⃗_n (n = 1, …, N_R) within the CBCT volume. In line with previous approaches, we assume that the motion trajectory within any ROI can be considered rigid and modeled as a time-dependent vector with three degrees of freedom, whose temporal variation follows a cubic B-spline basis with N_t temporal knots.…”
Section: Deep Autofocus Motion Compensation with DL-VIF
confidence: 99%
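The local-rigidity parameterization quoted above (a 3-DOF translation per ROI whose temporal variation lies on a cubic B-spline basis with N_t knots) can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the clamped-knot layout, function names, and coefficient shapes are assumptions.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    d = knots[i + k] - knots[i]
    if d > 0:
        left = (t - knots[i]) / d * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    d = knots[i + k + 1] - knots[i + 1]
    if d > 0:
        right = (knots[i + k + 1] - t) / d * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def roi_trajectory(coeffs, t, knots, degree=3):
    """Rigid 3-DOF translation of one ROI at normalized scan time t in [0, 1).

    coeffs: (n_basis, 3) B-spline coefficients -- these are the free
    variables the autofocus optimization searches over for each ROI.
    """
    n_basis = coeffs.shape[0]
    basis = np.array([bspline_basis(i, degree, t, knots) for i in range(n_basis)])
    return basis @ coeffs  # (3,) displacement in x, y, z
```

With clamped end knots, the trajectory interpolates the first coefficient at t = 0, and the basis functions form a partition of unity over the scan, so each knot acts as a local temporal control point of the motion.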
“…The performance on the 3D masking task was assessed with 3D Dice scores against the ground-truth 3D positions of the target structures. To obtain synthetic motion-corrupted images, we introduced to each volume of the test set a smooth, deformable, time-varying MVF with [4, 6, 8] mm maximum motion amplitude, positioned randomly in the upper abdomen (resembling clinically observed motion [1]). The temporal trajectory was set as a phase-shifted sinusoid with frequency ranging from 1 to 1.5 cycles/scan.…”
Section: Training and Experimental Validation
confidence: 99%
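The two ingredients of that validation setup — a phase-shifted sinusoidal temporal trajectory and a Dice overlap score against ground-truth masks — can be sketched as below. View counts and function names are illustrative; the actual MVF synthesis in the cited work is a full 3D deformable field, of which this only shows the scalar temporal component.

```python
import numpy as np

def sinusoid_trajectory(n_views, amplitude_mm, cycles_per_scan, phase):
    """Phase-shifted sinusoidal motion amplitude over the scan (1 unit = full scan)."""
    t = np.linspace(0.0, 1.0, n_views)
    return amplitude_mm * np.sin(2.0 * np.pi * cycles_per_scan * t + phase)

def dice_score(mask_a, mask_b):
    """3D Dice overlap between two boolean masks (1.0 = perfect agreement)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```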
“…Prior work towards deformable motion compensation includes image-based autofocus methods that optimize handcrafted sharpness metrics that are agnostic to the underlying anatomy [2]. Recent research on autofocus metrics yielded deep autofocus methods, based on learned image-appearance models that combine measures of image sharpness and realism of anatomical image content [3,4]. While deep autofocus integrates anatomical knowledge via data-driven metrics, those methods aim at achieving uniform compensation across tissues, disregarding the imaging task (i.e., visualization of contrast-enhanced vascular anatomy in this work).…”
Section: Introduction
confidence: 99%
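A representative handcrafted sharpness metric of the kind this passage contrasts with deep autofocus is gradient entropy: it rewards concentrated image gradients (sharp edges) and, as the passage notes, says nothing about whether the resulting structures are anatomically plausible. This is a generic sketch, not necessarily the specific metric used in the cited reference.

```python
import numpy as np

def gradient_entropy(volume, eps=1e-12):
    """Handcrafted autofocus metric: entropy of the normalized gradient magnitude.

    Lower values mean gradient energy is concentrated at a few sharp edges,
    i.e., a sharper (less motion-blurred) reconstruction; an autofocus loop
    would minimize this over candidate motion parameters.
    """
    grads = np.gradient(volume.astype(float))        # one array per axis
    gmag = np.sqrt(sum(g ** 2 for g in grads))       # gradient magnitude
    p = gmag / (gmag.sum() + eps)                    # normalize to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

A sharp step edge concentrates all gradient energy in a few voxels (low entropy), while a motion-blurred ramp spreads the same intensity change over many voxels (high entropy), which is exactly the behavior an autofocus optimizer exploits.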
“…This often results in artifacts from patient motion, which, in the case of abdominal imaging, arises from a combination of periodic and aperiodic sources that form a complex deformable motion field with unpredictable, highly heterogeneous temporal trajectories. Previous work [3,4] demonstrated the potential of image-based autofocus for motion compensation in interventional CBCT for non-periodic motion, via data-driven autofocus metrics based on deep convolutional neural networks (CNNs). Deep autofocus metrics were trained to reproduce measures of structural similarity (Visual Information Fidelity, VIF, in our case), removing the need for a motion-free reference and yielding a learned metric (DL-VIF) that effectively quantified the degree of motion contamination and shape distortion, as well as the anatomical realism of the image content.…”
Section: Introduction
confidence: 99%
“…Deep autofocus yielded robust performance in rigid head motion compensation [3]. Extension to deformable motion was achieved by incorporating context information into DL-VIF, with a context-aware deep CNN design, integrated into a multi-region autofocus framework [4].…”
Section: Introduction
confidence: 99%
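The multi-region autofocus framework referenced above can be sketched as a derivative-free search that jointly adjusts the per-ROI motion parameters to maximize a reference-free metric. This toy random-search sketch is an assumption for illustration only: the actual framework in [4] evaluates a learned DL-VIF metric on ROIs reconstructed under candidate motion, and uses a more sophisticated optimizer; `metric` here is a stand-in callable.

```python
import numpy as np

def multi_region_autofocus(metric, params0, n_iters=200, sigma=0.1, seed=0):
    """Toy joint optimization over per-ROI motion parameters.

    metric:  callable mapping an (N_R, P) parameter array to a scalar score,
             higher = better; stands in for DL-VIF evaluated on the ROIs.
    params0: (N_R, P) initial B-spline motion coefficients, one row per ROI.
    Returns the best parameters found and their score.
    """
    rng = np.random.default_rng(seed)
    params = params0.copy()
    best = metric(params)
    for _ in range(n_iters):
        # Jointly perturb all ROIs; keep the change only if the metric improves.
        cand = params + sigma * rng.standard_normal(params.shape)
        score = metric(cand)
        if score > best:
            params, best = cand, score
    return params, best
```

Optimizing all ROIs jointly, rather than one at a time, is what lets the framework exploit the smooth spatial variation of the motion field across neighboring regions.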