Purpose: The superior soft-tissue contrast achieved using magnetic resonance imaging (MRI) compared to x-ray computed tomography (CT) has led to the popularization of MRI-guided radiation therapy (MR-IGRT), especially in recent years with the advent of first- and second-generation MRI-based therapy delivery systems. The expanding use of these systems is driving interest in MRI-only RT workflows in which MRI is the sole imaging modality used for treatment planning and dose calculations. To enable such a workflow, synthetic CT (sCT) data must be generated from a patient's MRI data so that dose calculations may be performed using the electron density information ordinarily derived from CT images. In this study, we propose a novel deep spatial pyramid convolutional framework for the MRI-to-CT image-to-image translation task and compare its performance to the well-established U-Net architecture in a generative adversarial network (GAN) framework.

Methods: Our proposed framework utilizes atrous convolution in a method named atrous spatial pyramid pooling (ASPP) to significantly reduce the total number of parameters required to describe the model while effectively capturing rich, multi-scale structural information in a manner not possible in the conventional framework. The generative model consists of stacked encoders and decoders separated by the ASPP module, in which atrous convolution is applied at increasing rates in parallel to encode large-scale features. The performance of the proposed method is compared to that of the conventional GAN framework in terms of model training time and the image quality of the generated sCT, as measured by the root mean square error (RMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR), as a function of training data set size.
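The core idea of the ASPP module, applying the same convolution at several dilation (atrous) rates in parallel so that one layer sees multiple spatial scales, can be illustrated with a minimal 1D numpy sketch. This is a toy illustration only, not the authors' 2D implementation; the function names and the choice of rates (1, 2, 4) are assumptions for the example.

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """'Same'-padded 1D convolution with kernel w sampled at dilation `rate`."""
    k = len(w)
    pad = (k - 1) * rate // 2          # keep the output the same length as x
    xp = np.pad(x, pad)
    return np.array([sum(w[j] * xp[i + j * rate] for j in range(k))
                     for i in range(len(x))])

def aspp_1d(x, w, rates=(1, 2, 4)):
    """Toy ASPP head: one kernel applied in parallel at several atrous rates,
    with the multi-scale responses stacked as feature channels."""
    return np.stack([dilated_conv1d(x, w, r) for r in rates])

x = np.arange(8.0)
feats = aspp_1d(x, np.array([1.0, 1.0, 1.0]))
print(feats.shape)  # (3, 8): one response per dilation rate, same spatial length
```

Note that all three branches share the same 3-tap kernel: the rate-4 branch covers a 9-sample receptive field with no additional parameters, which is the parameter-efficiency argument made for ASPP above.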
Dose calculations based on sCT data generated using the proposed architecture are also compared to clinical plans to evaluate the dosimetric accuracy of the method.

Results: Significant reductions in training time and improvements in image quality are observed at every training data set size when the proposed framework is adopted in place of the conventional framework. Over 1042 test images, values of 17.7 ± 4.3 HU, 0.9995 ± 0.0003, and 71.7 ± 2.3 dB are observed for the RMSE, SSIM, and PSNR metrics, respectively. Dose distributions calculated from sCT data generated using the proposed framework demonstrate passing rates of 98% or greater using the 3D gamma index with a 2%/2 mm criterion.

Conclusions: The deep spatial pyramid convolutional framework proposed here demonstrates improved performance compared to the conventional GAN framework previously applied to the image-to-image translation task of sCT generation. Adopting the method is a first step toward an MRI-only RT workflow enabling widespread clinical applications for MR-IGRT, including online adaptive therapy.
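The 2%/2 mm gamma criterion used to report dosimetric agreement above combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified 1D global gamma analysis, sketched here in numpy with hypothetical dose profiles (the real comparison is 3D), shows the mechanics:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.02, dta_mm=2.0):
    """Simplified global 1D gamma analysis: dd is the fractional dose-difference
    tolerance (2%) and dta_mm the distance-to-agreement tolerance (2 mm)."""
    x = np.arange(len(dose_ref)) * spacing_mm
    d_tol = dd * dose_ref.max()                      # global dose normalization
    gammas = []
    for xi, di in zip(x, dose_ref):
        dist_term = ((x - xi) / dta_mm) ** 2
        dose_term = ((dose_eval - di) / d_tol) ** 2
        gammas.append(np.sqrt(np.min(dist_term + dose_term)))
    return float(np.mean(np.array(gammas) <= 1.0))   # fraction of points passing

ref = np.array([0.0, 50.0, 100.0, 50.0, 0.0])        # reference profile (e.g., CT-based)
ev  = np.array([0.0, 51.0, 100.0, 49.0, 0.0])        # evaluated profile (e.g., sCT-based)
print(gamma_pass_rate(ref, ev, spacing_mm=1.0))      # 1.0: every point passes 2%/2 mm
```

A point passes when some nearby evaluated point agrees within the combined dose/distance ellipse (gamma ≤ 1); the reported passing rate is the fraction of reference points that pass.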
Purpose: Deep learning (DL)-based super-resolution (SR) reconstruction for magnetic resonance imaging (MRI) has recently received attention due to its significant improvement in spatial resolution compared to conventional SR techniques. Challenges hindering the widespread implementation of these approaches remain, however. Low-resolution (LR) MRIs captured in the clinic exhibit complex tissue structures obfuscated by noise that are difficult for a simple DL framework to handle. Moreover, training a robust network for an SR task requires abundant, perfectly matched pairs of LR and high-resolution (HR) images that are often unavailable or difficult to collect. The purpose of this study is to develop a novel SR technique for MRI based on the concept of cascaded DL that allows for the reconstruction of high-quality SR images in the presence of insufficient training data, an unknown translation model, and noise.

Methods: The proposed cascaded deep learning framework consists of three components: (a) a denoising autoencoder (DAE), trained using clinical LR noisy MRI scans processed with a nonlocal means filter, that generates denoised LR data; (b) a down-sampling network (DSN), trained with a small amount of paired LR/HR data from volunteers, that allows for the generation of perfectly paired LR/HR data for training a generative model; and (c) the proposed SR generative model (p-SRG), trained with data generated by the DSN, that maps LR inputs to HR outputs. After training, LR clinical images may be fed through the DAE and p-SRG to yield SR reconstructions of the LR input. The application of this framework was explored in two settings: 3D breath-hold MRI axial SR reconstruction from LR axial scans (<3 s/vol) and enhancement of the spatial resolution of LR 4D-MRI acquisitions (0.5 s/vol).
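The nonlocal means filter used to prepare the DAE's training targets denoises each sample by averaging samples whose surrounding patches look similar, rather than only spatially adjacent ones. A toy 1D numpy version conveys the idea; the parameters (patch radius, search radius, smoothing constant h) are illustrative assumptions, not the study's settings:

```python
import numpy as np

def nlm_1d(x, patch=1, search=3, h=0.5):
    """Toy 1D nonlocal means: each sample is replaced by a similarity-weighted
    average of nearby samples whose surrounding patches look alike."""
    n = len(x)
    xp = np.pad(x, patch, mode='edge')
    out = np.empty(n)
    for i in range(n):
        pi = xp[i:i + 2 * patch + 1]                 # patch centered on sample i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        w, v = [], []
        for j in range(lo, hi):
            pj = xp[j:j + 2 * patch + 1]
            d2 = np.mean((pi - pj) ** 2)             # patch dissimilarity
            w.append(np.exp(-d2 / h ** 2))           # similar patches weigh more
            v.append(x[j])
        out[i] = np.average(v, weights=w)
    return out

noisy = np.array([1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1])
print(np.std(nlm_1d(noisy)) < np.std(noisy))         # True: filtering reduces variance
```

Because the weights depend on patch similarity rather than distance alone, the filter can suppress noise while preserving repeated structure, which is why it is a reasonable preprocessor for the denoising targets described above.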
Results: The DSN produces LR scans from HR inputs with higher fidelity to true LR clinical scans than conventional k-space down-sampling methods, as measured by the root mean square error (RMSE) and structural similarity index (SSIM). Furthermore, HR outputs generated by the p-SRG exhibit improved scores in the peak signal-to-noise ratio, normalized RMSE, SSIM, and blind/reference-less image spatial quality evaluator assessments compared to conventional approaches to MRI SR.

Conclusions: The robust SR reconstruction method for MRI based on the novel cascaded deep learning framework is an end-to-end method for producing detail-preserving SR reconstructions from noisy, LR clinical MRI scans. Fourfold enhancements in spatial resolution facilitate target delineation and motion management during radiation therapy, enabling precise MRI-guided radiation therapy with 3D LR breath-hold MRI and 4D-MRI in a clinically feasible time frame.
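The "conventional k-space down-sampling" baseline that the DSN is compared against amounts to keeping only the low-frequency center of k-space and inverse-transforming it. A minimal 1D numpy sketch of that baseline (illustrative only; clinical data are 2D/3D and use more elaborate sampling masks):

```python
import numpy as np

def kspace_downsample(signal, factor):
    """Conventional LR simulation: keep only the central 1/factor of k-space
    (low spatial frequencies) and inverse-transform the cropped spectrum."""
    n = len(signal)
    k = np.fft.fftshift(np.fft.fft(signal))          # center the DC component
    keep = n // factor
    start = (n - keep) // 2
    k_lo = k[start:start + keep]
    # scale by 1/factor so the mean image intensity is preserved
    return np.abs(np.fft.ifft(np.fft.ifftshift(k_lo))) / factor

hr = np.ones(8)
lr = kspace_downsample(hr, 2)
print(lr)  # [1. 1. 1. 1.]: a constant image survives low-pass down-sampling
```

The limitation motivating the DSN is visible in this construction: cropping k-space is a fixed linear low-pass model, whereas real clinical LR scans also differ in noise and acquisition characteristics that a learned down-sampler can capture.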
Background: The aim of this study was to simplify the adaptive treatment planning workflow while achieving optimal tumor-dose coverage in pancreatic cancer patients undergoing daily adaptive magnetic resonance image-guided radiation therapy (MR-IGRT).

Methods: In daily adaptive MR-IGRT, the plan objective function constructed during simulation is used for plan re-optimization throughout the course of treatment. In this study, we constructed the initial objective functions for 16 pancreatic cancer patients treated with the ViewRay™ MR-IGRT system using two methods: (1) the conventional method, which handles the stomach, duodenum, small bowel, and large bowel as separate organs at risk (OARs), and (2) the OAR grouping method, in which a combined OAR structure is created that encompasses the portions of these four primary OARs within 3 cm of the planning target volume (PTV). OAR grouping simulation plans were optimized such that the target coverage was comparable to that of the clinical simulation plan constructed in the conventional manner. In both cases, the initial objective function was then applied to each successive treatment fraction and the plan was re-optimized based on the patient's daily anatomy. OAR grouping plans were compared to conventional plans at each fraction in terms of coverage of the PTV and of the optimized PTV (PTV OPT), the structure that results from subtracting the overlapping OAR volumes, with an additional margin, from the PTV.

Results: Plan performance was enhanced across a majority of fractions using OAR grouping. The percentage of the PTV covered by 95% of the prescribed dose (D95) improved by an average of 3.87 ± 4.29%, while D95 coverage of the PTV OPT increased by 3.98 ± 4.97%.
Finally, D100 coverage of the PTV demonstrated an average increase of 6.47 ± 7.16% and a maximum improvement of 20.19%.

Conclusions: In this study, our proposed OAR grouping plans generally outperformed conventional plans, especially when the conventional simulation plan favored or disregarded an OAR through the assignment of distinct weighting parameters relative to the other critical structures. OAR grouping simplifies the MR-IGRT adaptive treatment planning workflow at simulation while demonstrating improved coverage compared to delivered pancreatic cancer treatment plans in daily adaptive radiation therapy.
Purpose: Applications of deep learning (DL) are essential to realizing an effective adaptive radiotherapy (ART) workflow. Despite the promise demonstrated by DL approaches in several critical ART tasks, unsolved challenges remain in achieving satisfactory generalizability of a trained model in a clinical setting. Foremost among these is the difficulty of collecting a task-specific training dataset with high-quality, consistent annotations for supervised learning applications. In this study, we propose a tailored DL framework for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an ART workflow, an approach we term Intentional Deep Overfit Learning (IDOL).

Methods: Implementing the IDOL framework for any task in radiotherapy consists of two training stages: (1) training a generalized model with a diverse training dataset of N patients, just as in the conventional DL approach, and (2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N + 1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and is thus widely applicable to many components of the ART workflow, three of which we use as a proof of concept here: the autocontouring task on replanning CTs for traditional ART, the MRI super-resolution (SR) task for MRI-guided ART, and the synthetic CT (sCT) reconstruction task for MRI-only ART.

Results: In the replanning CT autocontouring task, the accuracy measured by the Dice similarity coefficient improves from 0.847 with the general model to 0.935 with the IDOL model. In the case of MRI SR, the mean absolute error (MAE) is improved by 40% using the IDOL framework over the conventional model.
Finally, in the sCT reconstruction task, the MAE is reduced from 68 to 22 HU by utilizing the IDOL framework.

Conclusions: In this study, we propose a novel IDOL framework for ART and demonstrate its feasibility using three ART tasks. We expect the IDOL framework to be especially useful in creating personally tailored models in situations with limited availability of training data but existing prior information, which is usually true in the medical setting in general and especially true in ART.

Keywords: adaptive radiotherapy, deep learning, overfitting, personalized model
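The two-stage IDOL recipe, fit a general model on a population, then deliberately overfit it to augmented patient-specific prior data, can be demonstrated with a deliberately tiny numpy example. Everything here (a linear model, gradient descent, jitter augmentation, the slopes 1.0 and 2.0) is a stand-in assumption to show the training schedule, not the deep networks used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(X, y, w0, lr=0.01, steps=2000):
    """Plain gradient descent on mean-squared error for a linear model."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

# Stage 1: a general model fit to a diverse population (average slope ~1).
X_pop = rng.normal(size=(200, 1))
y_pop = 1.0 * X_pop[:, 0] + rng.normal(scale=0.5, size=200)
w_general = train_linear(X_pop, y_pop, w0=np.zeros(1))

# Stage 2: intentionally overfit the general model to one patient's prior data
# (true slope 2.0), augmented by small jitter perturbations of the few samples.
X_pat = rng.normal(size=(5, 1))
y_pat = 2.0 * X_pat[:, 0]
X_aug = np.vstack([X_pat + rng.normal(scale=0.01, size=X_pat.shape)
                   for _ in range(20)])
y_aug = np.tile(y_pat, 20)
w_idol = train_linear(X_aug, y_aug, w0=w_general)

err_general = np.mean((X_pat[:, 0] * w_general[0] - y_pat) ** 2)
err_idol = np.mean((X_pat[:, 0] * w_idol[0] - y_pat) ** 2)
print(err_idol < err_general)  # True: the overfitted model fits this patient better
```

The overfitting that conventional practice avoids is the point here: generalization to other patients is sacrificed on purpose, because the personalized model only ever needs to perform well on patient N + 1.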