Background: Cone beam computed tomography (CBCT) can be used to evaluate inter-fraction anatomical changes over the entire course of image-guided radiotherapy (IGRT). However, CBCT artifacts from various sources restrict the full application of CBCT-guided adaptive radiation therapy (ART).

Purpose: Inter-fraction anatomical changes during ART, including variations in tumor size and normal tissue anatomy, can affect radiation therapy (RT) efficacy. Acquiring high-quality CBCT images that accurately capture patient- and fraction-specific (PFS) anatomical changes is crucial for successful IGRT.

Methods: To enhance CBCT image quality, we proposed PFS lung diffusion models (PFS-LDMs). The proposed PFS models use a pre-trained general lung diffusion model (GLDM) as a baseline, which is trained on historical deformed CBCT (dCBCT)-planning CT (pCT) paired data. For a given patient, a new PFS model is fine-tuned on a CBCT-deformed pCT (dpCT) pair after each fraction to learn the PFS knowledge needed to generate a personalized synthetic CT (sCT) with quality comparable to the pCT or dpCT. The learned PFS knowledge comprises the specific mapping relationships between personalized CBCT-dpCT pairs, including personal inter-fraction anatomical changes. The PFS-LDMs were evaluated on an institutional lung cancer dataset using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and structural similarity index measure (SSIM) metrics. We also compared our PFS-LDMs with a mainstream GAN-based model, demonstrating that our PFS fine-tuning strategy can be applied to existing generative models.

Results: Our models showed remarkable improvements across all four evaluation metrics. The proposed PFS-LDMs outperformed the GLDM, demonstrating the effectiveness of the proposed fine-tuning strategy. The PFS model fine-tuned with CBCT images from four prior fractions reduced the MAE from 103.95 to 15.96 Hounsfield units (HU) and increased the mean PSNR, NCC, and SSIM from 25.36 dB to 33.57 dB, 0.77 to 0.98, and 0.75 to 0.97, respectively. Applying our PFS fine-tuning strategy to a Cycle GAN model also showed improvements, with all four fine-tuned PFS Cycle GAN (PFS-CG) models outperforming the general Cycle GAN model. Overall, our proposed PFS fine-tuning strategy improved CBCT image quality compared to both the pre-correction and non-fine-tuned general models, with the proposed PFS-LDMs yielding better performance than the GAN-based model across all metrics.

Conclusions: Our proposed PFS-LDMs significantly improve CBCT image quality with increased HU accuracy and fewer artifacts, thus better capturing inter-fraction anatomical changes. This lays the groundwork for CBCT-based ART, which could enhance clinical efficiency and achieve personalized high-precision treatment by accounting for inter-fraction anatomical changes.
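For readers unfamiliar with the four reported image-quality metrics, the sketch below shows one way they could be computed for a synthetic CT volume against its reference CT using NumPy and scikit-image. This is a minimal illustration, not the authors' evaluation code; the array names, the assumed HU data range, and the NCC formulation are assumptions.

```python
# Minimal sketch (not the authors' code): MAE, PSNR, NCC, and SSIM between a
# synthetic CT (sct) and a reference deformed planning CT (ref), both assumed
# to be NumPy arrays of Hounsfield units with identical shapes.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sct(sct: np.ndarray, ref: np.ndarray, data_range: float = 2000.0) -> dict:
    """Return MAE (HU), PSNR (dB), NCC, and SSIM for a pair of CT volumes."""
    mae = float(np.mean(np.abs(sct - ref)))  # mean absolute error in HU
    psnr = peak_signal_noise_ratio(ref, sct, data_range=data_range)
    # Normalized cross-correlation: mean product of the standardized volumes.
    a = (sct - sct.mean()) / sct.std()
    b = (ref - ref.mean()) / ref.std()
    ncc = float(np.mean(a * b))
    ssim = structural_similarity(ref, sct, data_range=data_range)
    return {"MAE": mae, "PSNR": psnr, "NCC": ncc, "SSIM": ssim}
```

A lower MAE and higher PSNR, NCC, and SSIM indicate closer agreement between the synthetic CT and the reference, which is how the improvements reported above should be read.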
Software is a central part of the scientific process and is involved in obtaining, analysing, visualising, and processing research data. Understanding the provenance of research therefore requires an understanding of the software involved. However, software citations in scientific publications are often informal, which creates challenges for understanding software adoption. This paper provides an overview of the Software Mention Detection (SOMD) shared task conducted as part of the 2024 Natural Scientific Language Processing Workshop, which aims to advance the state of the art in NLP methods for detecting software mentions and additional information in scholarly publications. The SOMD shared task encompasses three subtasks, concerned with software mention recognition (subtask I), recognition of additional information (subtask II), and classification of the involved relations (subtask III). We present an overview of the tasks, the received submissions, and the techniques used. The best submissions achieved F1 scores of 0.74 (subtask I), 0.838 (subtask II), and 0.911 (subtask III), indicating both task feasibility and potential for further performance gains.
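As a point of reference for how such scores are typically obtained, here is a minimal, hypothetical sketch of strict span-level precision, recall, and F1 for a mention-recognition subtask. The data layout and the example numbers are illustrative assumptions, not the shared task's official evaluation script.

```python
# Minimal sketch (assumptions, not the SOMD evaluation code): strict span-level
# F1 where gold and predicted mentions are sets of (start, end, label) tuples,
# one set per sentence; a prediction counts only on an exact span+label match.
from typing import List, Set, Tuple

Span = Tuple[int, int, str]

def span_f1(gold: List[Set[Span]], pred: List[Set[Span]]) -> float:
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # exact matches
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # spurious predictions
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # missed gold mentions
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Illustrative example: one sentence with two gold mentions, one recovered.
gold = [{(0, 4, "software"), (5, 7, "version")}]
pred = [{(0, 4, "software")}]
print(round(span_f1(gold, pred), 3))  # precision 1.0, recall 0.5 -> F1 0.667
```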