Medical image registration is a vital component of many medical procedures, such as image-guided radiotherapy (IGRT), as it allows for more accurate dose delivery and better management of side effects. Recently, the successful implementation of deep learning (DL) in various fields has prompted many research groups to apply DL to three-dimensional (3D) medical image registration, and several of these efforts have led to promising results. This review summarizes the progress made in DL-based 3D image registration over the past five years and identifies existing challenges and potential avenues for further research. The collected studies were statistically analyzed based on the region of interest (ROI), image modality, supervision method, and registration evaluation metrics, and were classified into three categories: deep iterative registration, supervised registration, and unsupervised registration. The studies are thoroughly reviewed and their unique contributions highlighted. Each category is followed by a summary discussing its advantages, challenges, and trends. Finally, the challenges common to all categories are discussed and potential future research topics are identified.
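For context on the unsupervised category mentioned above, the following minimal sketch (illustrative only, not drawn from any specific study in the review) shows the typical training objective: an image-similarity term between the warped moving image and the fixed image plus a smoothness penalty on the predicted displacement field. Function and variable names are assumptions made for illustration.

import torch

def unsupervised_registration_loss(warped, fixed, dvf, lam=0.01):
    # warped: moving image resampled by the predicted displacement field
    # fixed:  target (fixed) image
    # dvf:    displacement field of shape (B, 3, D, H, W), in voxels
    # Similarity term: mean squared intensity difference
    # (normalized cross-correlation or mutual information are common alternatives).
    sim = torch.mean((warped - fixed) ** 2)
    # Smoothness term: squared finite differences of the displacement field
    # along each spatial axis, discouraging non-smooth deformations.
    dz = dvf[:, :, 1:, :, :] - dvf[:, :, :-1, :, :]
    dy = dvf[:, :, :, 1:, :] - dvf[:, :, :, :-1, :]
    dx = dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]
    smooth = dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()
    return sim + lam * smooth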
Purpose: To investigate the role of different multi-organ omics-based prediction models for pre-treatment prediction of adaptive radiotherapy (ART) eligibility in patients with nasopharyngeal carcinoma (NPC).

Methods and Materials: Pre-treatment contrast-enhanced computed tomography and magnetic resonance images, radiotherapy dose, and contour data of 135 NPC patients treated at Hong Kong Queen Elizabeth Hospital were retrospectively analyzed to extract multi-omics features, namely radiomics (R), morphology (M), dosiomics (D), and contouromics (C), from a total of eight organ structures. During model development, the patient cohort was divided into a training set and a hold-out test set in a 7:3 ratio over 20 iterations. Four single-omics models (R, M, D, C) and four multi-omics models (RD, RC, RM, RMDC) were developed on the training data using the Ridge and Multi-Kernel Learning (MKL) algorithms, respectively, under 10-fold cross-validation, and evaluated on the hold-out test data using the average area under the receiver-operating-characteristic curve (AUC). The best-performing single-omics model was first identified by comparing the AUC distributions across the 20 iterations among the four single-omics models using a two-sided Student's t-test, and was then retrained with the MKL algorithm for a fair comparison with the four multi-omics models.

Results: The R model significantly outperformed the other three single-omics models (all p < 0.0001), achieving average AUCs of 0.942 (95% CI: 0.938-0.946) and 0.918 (95% CI: 0.903-0.933) in the training and hold-out test sets, respectively. When trained with MKL, the R model (R_MKL) yielded increased AUCs of 0.984 (95% CI: 0.981-0.988) and 0.927 (95% CI: 0.905-0.948) in the training and hold-out test sets, respectively, while showing no significant difference from any of the studied multi-omics models in the hold-out test sets. Intriguingly, radiomic features accounted for the majority of the final selected features, ranging from 64% to 94%, in all the studied multi-omics models.

Conclusions: Among all the studied models, the radiomic model played a dominant role in predicting ART eligibility in NPC patients, and radiomic features accounted for the largest proportion of features in all the multi-omics models.
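The following is a hedged sketch of the single-omics evaluation loop described above: repeated 70/30 splits, Ridge training tuned with 10-fold cross-validation, and hold-out AUC averaging follow the abstract, while the data loading, feature extraction, and MKL step are omitted. Representing "Ridge" with scikit-learn's RidgeClassifier and the specific hyperparameter grid are assumptions, not the authors' implementation.

import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score

def evaluate_single_omics(X, y, n_iterations=20, seed=0):
    # X: (n_patients, n_features) single-omics feature matrix
    # y: binary ART-eligibility labels
    aucs = []
    for i in range(n_iterations):
        # Stratified 70/30 split, repeated with a different seed each iteration.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=seed + i)
        # Ridge classifier tuned by 10-fold cross-validation on the training set.
        model = GridSearchCV(RidgeClassifier(),
                             {"alpha": np.logspace(-3, 3, 13)},
                             cv=10, scoring="roc_auc")
        model.fit(X_tr, y_tr)
        # decision_function provides a continuous score for AUC computation.
        aucs.append(roc_auc_score(y_te, model.decision_function(X_te)))
    return np.mean(aucs), np.std(aucs)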
Radiomic model reliability is a central premise for its clinical translation. Presently, it is assessed using test-retest or external data, which, unfortunately, are often scarce in practice. Therefore, we aimed to develop a novel image perturbation-based method (IPBM), the first of its kind, for building reliable radiomic models. We first developed a radiomic prognostic model for head-and-neck cancer patients on a training cohort (70%) and evaluated it on a testing cohort (30%) using the concordance index (C-index). Subsequently, we applied the IPBM to the CT images of both cohorts to generate 60 additional samples for each (the Perturbed-Train and Perturbed-Test cohorts). Model reliability was assessed using the intra-class correlation coefficient (ICC) to quantify the consistency of the C-index among the 60 samples in the Perturbed-Train and Perturbed-Test cohorts. In addition, we re-trained the radiomic model using only reliable radiomic features (RFs; ICC > 0.75) to validate the IPBM. Results showed moderate model reliability in the Perturbed-Train (ICC: 0.565, 95% CI 0.518-0.615) and Perturbed-Test (ICC: 0.596, 95% CI 0.527-0.670) cohorts. Enhanced reliability of the re-trained model was observed in the Perturbed-Train (ICC: 0.782, 95% CI 0.759-0.815) and Perturbed-Test (ICC: 0.825, 95% CI 0.782-0.867) cohorts, indicating the validity of the IPBM. To conclude, we demonstrated the capability of the IPBM for building reliable radiomic models, providing the community with a novel model-reliability assessment strategy prior to prospective evaluation.
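As an illustration of the ICC-based feature-screening step (retaining only RFs with ICC > 0.75), the sketch below assumes the perturbed feature values have already been assembled into a long-format pandas DataFrame; the column names, the choice of ICC(2,1), and the use of the pingouin package are assumptions for illustration, not the authors' implementation.

import pingouin as pg

def reliable_features(feature_table, threshold=0.75):
    # feature_table: long-format pandas DataFrame with columns
    # 'patient', 'perturbation', 'feature', 'value', where each row is one
    # feature value computed from one perturbed image of one patient.
    keep = []
    for name, grp in feature_table.groupby("feature"):
        icc = pg.intraclass_corr(data=grp, targets="patient",
                                 raters="perturbation", ratings="value")
        # ICC2: two-way random-effects, single-measurement, absolute agreement.
        icc2 = icc.loc[icc["Type"] == "ICC2", "ICC"].item()
        if icc2 > threshold:
            keep.append(name)
    return keep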
Background: Most available four-dimensional (4D) magnetic resonance imaging (MRI) techniques are limited by insufficient image quality and long acquisition times, or require specially designed sequences or hardware that are not available in the clinic. These limitations have greatly hindered the clinical implementation of 4D-MRI.

Purpose: This study aims to develop a fast, ultra-quality (UQ) 4D-MRI reconstruction method using a commercially available 4D-MRI sequence and a dual-supervised deformation estimation model (DDEM).

Methods: Thirty-nine patients receiving radiotherapy for liver tumors were included. Each patient was scanned using a time-resolved imaging with interleaved stochastic trajectories (TWIST)-volumetric interpolated breath-hold examination (VIBE) MRI sequence to acquire 4D magnetic resonance (MR) images. They also received 3D T1-/T2-weighted MRI scans as prior images, and UQ 4D-MRI at any time point was considered a deformation of these prior images. A DDEM was developed to obtain a 4D deformable vector field (DVF) from the 4D-MRI data, and the prior images were deformed using this 4D DVF to generate UQ 4D-MR images. The registration accuracies of the DDEM, VoxelMorph (normalized cross-correlation [NCC] supervised), VoxelMorph (end-to-end point error [EPE] supervised), and the parametric total variation (pTV) algorithm were compared. Tumor motion on UQ 4D-MRI was evaluated quantitatively using region-of-interest (ROI) tracking errors, while image quality was evaluated using the contrast-to-noise ratio (CNR), lung-liver edge sharpness, and the perceptual blur metric (PBM).

Results: The registration accuracy of the DDEM was significantly better than those of VoxelMorph (NCC supervised), VoxelMorph (EPE supervised), and the pTV algorithm (all p < 0.001), with an inference time of 69.3 ± 5.9 ms. UQ 4D-MRI yielded ROI tracking errors of 0.79 ± 0.65, 0.50 ± 0.55, and 0.51 ± 0.58 mm in the superior-inferior, anterior-posterior, and medial-lateral directions, respectively. From the original 4D-MRI to UQ 4D-MRI, the CNR increased from 7.25 ± 4.89 to 18.86 ± 15.81; the lung-liver edge full-width-at-half-maximum decreased from 8.22 ± 3.17 to 3.65 ± 1.66 mm in the in-plane direction and from 8.79 ± 2.78 to 5.04 ± 1.67 mm in the cross-plane direction; and the PBM decreased from 0.68 ± 0.07 to 0.38 ± 0.01.

Conclusion: This novel DDEM method successfully generated UQ 4D-MR images based on a commercial 4D-MRI sequence. It shows great promise for improving liver tumor motion management during radiation therapy.
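To illustrate the DVF-based generation step (deforming a prior image to produce a UQ 4D-MR frame), the sketch below warps a 3D prior image with a displacement vector field given in voxel units using trilinear resampling. It is a minimal sketch under assumed array shapes and function names, not the authors' implementation.

import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_dvf(prior_image, dvf):
    # prior_image: 3D array of shape (D, H, W)
    # dvf: displacement field of shape (3, D, H, W), in voxel units
    # Build the identity sampling grid, one coordinate array per axis.
    grid = np.meshgrid(*[np.arange(s) for s in prior_image.shape], indexing="ij")
    # Shift each grid coordinate by the corresponding displacement component.
    coords = [g + d for g, d in zip(grid, dvf)]
    # Trilinear interpolation of the prior image at the displaced positions.
    return map_coordinates(prior_image, coords, order=1, mode="nearest")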