Background
In magnetic resonance image (MRI)‐guided radiotherapy (MRgRT), 2D rapid imaging is commonly used to track moving targets with high temporal frequency to minimize gating latency. However, anatomical motion is not constrained to 2D, and a portion of the target may be missed during treatment if 3D motion is not evaluated. While some MRgRT systems attempt to capture 3D motion by sequentially tracking motion in 2D orthogonal imaging planes, this approach assesses 3D motion via independent 2D measurements at alternating instances, lacking a simultaneous 3D motion assessment in both imaging planes.

Purpose
We hypothesized that a motion model could be derived from prior 2D orthogonal imaging to estimate 3D motion in both planes simultaneously. We present a manifold learning technique to estimate 3D motion from 2D orthogonal imaging.

Methods
Five healthy volunteers were scanned under an IRB‐approved protocol using a 3.0 T Siemens Skyra simulator. Images of the liver dome were acquired during free breathing (FB) with a 2.6 mm × 2.6 mm in‐plane resolution for approximately 10 min in alternating sagittal and coronal planes at ∼5 frames per second. The motion model was derived using a combined manifold learning and alignment approach based on locally linear embedding (LLE). The model utilized the spatially overlapping MRI signal shared by both imaging planes to group together images that had similar signals, enabling motion estimation in both planes simultaneously. The model's motion estimates were compared to the ground truth motion derived in each newly acquired image using deformable registration. A simulated target was defined on the dome of the liver and used to evaluate model performance. The Dice similarity coefficient and the distance between the model‐tracked and image‐tracked contour centroids were evaluated.
Motion modeling error was estimated in the orthogonal plane by back‐propagating the motion to the currently imaged plane and by interpolating the motion between image acquisitions where ground truth motion was available.

Results
The motion observed in the healthy volunteer studies ranged from 12.6 to 38.7 mm. On average, the model demonstrated sub‐millimeter precision and a Dice coefficient > 0.95 compared to the ground truth motion observed in the currently imaged plane. The average Dice coefficient and centroid distance between the model‐tracked and ground truth target contours were 0.96 ± 0.03 and 0.26 ± 0.27 mm, respectively, across all volunteer studies. The out‐of‐plane centroid motion error was estimated to be 0.85 ± 1.07 mm and 1.26 ± 1.38 mm using the back‐propagation (BP) and interpolation error estimation methods, respectively.

Conclusions
The healthy volunteer studies indicate promising results using the proposed motion modeling technique. Out‐of‐plane modeling error was estimated to be higher but still demonstrated sub‐voxel motion accuracy.
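The key geometric fact the model exploits is that orthogonal sagittal and coronal planes intersect along a single line of voxels, so frames from the two planes can be grouped by the similarity of their shared signal. The sketch below illustrates that idea only in its simplest nearest-neighbor form; it is not the authors' LLE-based implementation, and all array shapes, names, and the synthetic data are assumptions for illustration.

```python
# Illustrative sketch (not the paper's LLE implementation): pair a newly
# acquired sagittal frame with the stored coronal frame whose signal along
# the shared line of intersection is most similar. Profiles are synthetic.
import math
import random

random.seed(0)
n_coronal, line_len = 50, 64  # hypothetical: 50 stored frames, 64-voxel line

# Stored coronal frames, reduced to their intersection-line intensity profiles.
coronal_profiles = [[random.gauss(0.0, 1.0) for _ in range(line_len)]
                    for _ in range(n_coronal)]

def match_orthogonal_frame(profile, profiles):
    """Index of the stored profile nearest (Euclidean) to the new one."""
    dists = [math.dist(profile, p) for p in profiles]
    return dists.index(min(dists))

# A new sagittal frame whose shared-line signal resembles stored frame 17,
# perturbed by a small amount of noise.
new_profile = [v + 0.01 * random.gauss(0.0, 1.0) for v in coronal_profiles[17]]
print(match_orthogonal_frame(new_profile, coronal_profiles))  # 17
```

In the paper this grouping is done jointly via manifold learning and alignment rather than raw nearest-neighbor distance, which lets motion in both planes be estimated simultaneously from the low-dimensional embedding.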
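The two evaluation metrics named in the abstract, the Dice similarity coefficient and the contour-centroid distance, can be sketched as follows. Contours are represented here as sets of pixel coordinates; the masks, function names, and the toy example are hypothetical, with only the 2.6 mm in-plane resolution taken from the abstract.

```python
# Hedged sketch of the abstract's evaluation metrics: Dice similarity
# coefficient, 2|A∩B| / (|A| + |B|), and centroid distance in millimeters.
import math

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two pixel-coordinate sets."""
    if not mask_a and not mask_b:
        return 1.0
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def centroid(mask):
    """Mean (row, col) pixel position of a mask."""
    n = len(mask)
    return (sum(p[0] for p in mask) / n, sum(p[1] for p in mask) / n)

def centroid_distance_mm(mask_a, mask_b, pixel_mm=2.6):
    """Euclidean centroid distance, scaled by the 2.6 mm in-plane resolution."""
    (ra, ca), (rb, cb) = centroid(mask_a), centroid(mask_b)
    return pixel_mm * math.hypot(ra - rb, ca - cb)

# Toy example: two 3 x 3 square "targets" offset by one pixel column.
model_mask = {(r, c) for r in range(3) for c in range(3)}
image_mask = {(r, c + 1) for r in range(3) for c in range(3)}
print(round(dice_coefficient(model_mask, image_mask), 3))      # 0.667
print(round(centroid_distance_mm(model_mask, image_mask), 1))  # 2.6
```

A one-pixel misalignment of a small target already costs a third of the Dice score here, which is why the reported 0.96 ± 0.03 Dice and 0.26 ± 0.27 mm centroid distance correspond to sub-voxel agreement at this resolution.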