Background: Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the need for recontouring and the associated time burden hinder a real-time or online ART workflow. In response to this challenge, auto-segmentation approaches based on deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing these approaches in a clinical setting remains a challenge, chiefly because of the difficulty of curating a data set of sufficient size and quality to achieve generalizability in a trained model.

Purpose: To address this challenge, we previously developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly that the personalized dataset was insufficient to effectively overfit the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.

Methods: The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained with a diverse set of patient data (n = 100 patients) consisting of CT images and clinical contours. The general model is then tuned with a data set built in two steps: (a) selection of a subset of the patient data (m < n) using similarity metrics (mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and universal quality image index (UQI)); and (b) adjustment of the CT images and clinical contours using a deformation vector generated from the reference patient and the patients selected in (a). After training, the general model, the continual model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95th-percentile Hausdorff distance (HD95%) computed for 18 structures in 20 test patients.

Results: Implementing the PHL-IDOL framework improved segmentation performance for each patient. The average Dice score increased from 0.81 ± 0.05 with the general model and 0.83 with both the continual and conventional IDOL models to 0.87 with the PHL-IDOL model. Similarly, the average Hausdorff distance decreased from 3.06 with the general model to 2.84 with the continual model, 2.79 with the conventional IDOL model, and 2.36 with the PHL-IDOL model. The standard deviations of both metrics were reduced by nearly half between the general model and the PHL-IDOL model.

Conclusion: Applied to the auto-segmentation task, the PHL-IDOL framework achieves improved performance compared to the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.
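The patient-selection step (a) can be illustrated with a short sketch. The Python code below is illustrative only: the function name select_similar_patients, the equal weighting of the four metrics, and the simplified single-window SSIM are assumptions not stated in the abstract. It ranks candidate training patients by CT similarity to the reference patient and returns the m most similar ones.

```python
# Hedged sketch of step (a): rank prior patients by CT similarity to a reference
# patient using MSE, PSNR, SSIM, and UQI, then keep the m most similar.
# Names, weighting, and the global (single-window) SSIM are illustrative assumptions.
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range):
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(data_range ** 2 / err)

def uqi(a, b, eps=1e-12):
    # Global universal quality image index (Wang & Bovik), in [-1, 1].
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(4 * cov * mu_a * mu_b / ((var_a + var_b) * (mu_a ** 2 + mu_b ** 2) + eps))

def ssim_global(a, b, data_range):
    # Single-window SSIM over the whole volume; a simplification of the usual
    # local-window SSIM that is sufficient for ranking similar patients.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

def select_similar_patients(reference_ct, candidate_cts, m=10, data_range=4000.0):
    """Return indices of the m candidates most similar to the reference CT,
    scored by an (assumed) equal-weight combination of the four metrics."""
    ref = np.asarray(reference_ct, dtype=np.float64)
    scores = []
    for ct in candidate_cts:
        ct = np.asarray(ct, dtype=np.float64)
        score = np.mean([
            -mse(ref, ct) / data_range ** 2,        # lower MSE is better
            psnr(ref, ct, data_range) / 100.0,      # higher PSNR is better
            ssim_global(ref, ct, data_range),
            uqi(ref, ct),
        ])
        scores.append(score)
    return list(np.argsort(scores)[::-1][:m])
```

In the PHL-IDOL pipeline, the selected patients would then feed step (b), where deformation vectors between the reference patient and the selected patients are used to adjust the CT images and clinical contours for personalized tuning.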
Background: Cone beam computed tomography (CBCT) can be used to evaluate inter-fraction anatomical changes over the entire course of image-guided radiotherapy (IGRT). However, CBCT artifacts from various sources restrict the full application of CBCT-guided adaptive radiation therapy (ART).

Purpose: Inter-fraction anatomical changes during ART, including variations in tumor size and normal tissue anatomy, can affect radiation therapy (RT) efficacy. Acquiring high-quality CBCT images that accurately capture patient- and fraction-specific (PFS) anatomical changes is crucial for successful IGRT.

Methods: To enhance CBCT image quality, we proposed PFS lung diffusion models (PFS-LDMs). The proposed PFS models use a pre-trained general lung diffusion model (GLDM) as a baseline, which is trained on historical deformed CBCT (dCBCT)-planning CT (pCT) paired data. For a given patient, a new PFS model is fine-tuned on a CBCT-deformed pCT (dpCT) pair after each fraction to learn the PFS knowledge needed to generate a personalized synthetic CT (sCT) with quality comparable to the pCT or dpCT. The learned PFS knowledge comprises the specific mapping relationships, including personal inter-fraction anatomical changes, between personalized CBCT-dpCT pairs. The PFS-LDMs were evaluated on an institutional lung cancer dataset using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and structural similarity index measure (SSIM) metrics. We also compared our PFS-LDMs with a mainstream GAN-based model, demonstrating that the PFS fine-tuning strategy can be applied to existing generative models.

Results: Our models showed improvements across all four evaluation metrics. The proposed PFS-LDMs outperformed the GLDM, demonstrating the effectiveness of the proposed fine-tuning strategy. The PFS model fine-tuned with CBCT images from four prior fractions reduced the MAE from 103.95 to 15.96 Hounsfield units (HU) and increased the mean PSNR, NCC, and SSIM from 25.36 dB to 33.57 dB, 0.77 to 0.98, and 0.75 to 0.97, respectively. Applying the PFS fine-tuning strategy to a CycleGAN model also showed improvements, with all four fine-tuned PFS CycleGAN (PFS-CG) models outperforming the general CycleGAN model. Overall, the proposed PFS fine-tuning strategy improved CBCT image quality compared to both the pre-correction and non-fine-tuned general models, with the proposed PFS-LDMs yielding better performance than the GAN-based model across all metrics.

Conclusions: The proposed PFS-LDMs significantly improve CBCT image quality, with increased HU accuracy and fewer artifacts, thus better capturing inter-fraction anatomical changes. This lays the groundwork for CBCT-based ART, which could enhance clinical efficiency and achieve personalized high-precision treatment by accounting for inter-fraction anatomical changes.
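The per-fraction fine-tuning strategy can be summarized in a brief sketch. The PyTorch code below is a hedged illustration, not the paper's implementation: the network is a generic image-to-image module, the L1 loss stands in for the actual latent diffusion training objective, and the function name finetune_pfs_model, the epoch count, and the learning rate are assumed for the example. It shows the core idea of copying the pre-trained general model and fine-tuning that copy on a single patient's accumulated CBCT-dpCT pairs after each fraction.

```python
# Hedged sketch of patient/fraction-specific (PFS) fine-tuning: a copy of the
# pre-trained general model is fine-tuned on one patient's prior-fraction
# CBCT-dpCT pairs. Architecture, loss, and hyperparameters are placeholders.
import copy
import torch
import torch.nn as nn

def finetune_pfs_model(general_model: nn.Module,
                       cbct_dpct_pairs: list,
                       epochs: int = 50,
                       lr: float = 1e-4) -> nn.Module:
    """Fine-tune a copy of the general model on (CBCT, dpCT) tensor pairs."""
    pfs_model = copy.deepcopy(general_model)   # keep the general model intact
    optimizer = torch.optim.Adam(pfs_model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                      # surrogate image-to-image loss
    pfs_model.train()
    for _ in range(epochs):
        for cbct, dpct in cbct_dpct_pairs:
            optimizer.zero_grad()
            sct = pfs_model(cbct)              # synthetic CT prediction
            loss = loss_fn(sct, dpct)
            loss.backward()
            optimizer.step()
    return pfs_model

# Illustrative use: after fraction k, fine-tune on all pairs observed so far.
# pfs_model_k = finetune_pfs_model(gldm, pairs_from_fractions_1_to_k)
```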