Deep learning (DL)-based auto-segmentation has the potential for accurate organ delineation in radiotherapy applications but requires large amounts of cleanly labeled data to train a robust model. However, annotating medical images is extremely time-consuming and requires clinical expertise, especially for segmentation, which demands voxel-wise labels. On the other hand, medical images without annotations are abundant and highly accessible. To alleviate the limited availability of clean labels, we propose a weakly supervised DL training approach that uses deformable image registration (DIR)-based annotations, leveraging the abundance of unlabeled data. We generate pseudo-contours by using DIR to propagate atlas contours onto abundant unlabeled images, and then train a robust DL-based segmentation model. With 10 labeled CT scans from the TCIA dataset and 50 unlabeled CT scans from our institution, our model achieved Dice similarity coefficients of 87.9%, 73.4%, 73.4%, 63.2%, and 61.0% on the mandible, left and right parotid glands, and left and right submandibular glands of the TCIA test set, and competitive performance on our institutional clinical dataset and a third-party (PDDCA) dataset. Experimental results demonstrated that the proposed method outperforms traditional multi-atlas DIR methods and fully supervised training on limited data, and is promising for DL-based medical image segmentation applications with limited annotated data.
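To make the pseudo-contour generation step concrete, the sketch below propagates an atlas contour onto an unlabeled target CT via deformable registration and nearest-neighbor label warping. This is a minimal sketch: the abstract does not name the DIR algorithm used, so SimpleITK's Demons filter stands in, and the function name and parameter values are illustrative assumptions.

```python
import SimpleITK as sitk

def propagate_atlas_contour(atlas_img, atlas_label, target_img):
    """Warp an atlas contour onto an unlabeled target CT to create a
    pseudo-label. Sketch only: Demons is a stand-in for whatever DIR
    algorithm the paper actually used."""
    fixed = sitk.Cast(target_img, sitk.sitkFloat32)
    moving = sitk.Cast(atlas_img, sitk.sitkFloat32)
    # Match intensity distributions so the similarity metric is meaningful.
    moving = sitk.HistogramMatching(moving, fixed)

    demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.5)  # Gaussian smoothing of the update field
    displacement = demons.Execute(fixed, moving)

    # Nearest-neighbor interpolation keeps the warped contour a discrete label map.
    transform = sitk.DisplacementFieldTransform(
        sitk.Cast(displacement, sitk.sitkVectorFloat64))
    return sitk.Resample(atlas_label, target_img, transform,
                         sitk.sitkNearestNeighbor, 0, atlas_label.GetPixelID())
```

The resulting pseudo-labels can then be mixed with the small clean-labeled set to train the segmentation network.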
Purpose: Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of the online ART process is accurately and efficiently delineating organs at risk (OARs) and targets on online images, such as cone beam computed tomography (CBCT). Direct application of deep learning (DL)-based segmentation to CBCT images suffers from issues such as low image quality and limited available contour labels for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models.
Methods: The RgDL framework is composed of two components: image registration and RgDL segmentation. The image registration algorithm transforms/deforms planning contours, which are subsequently used as guidance by the DL model to obtain accurate final segmentations. We implemented the proposed framework in two ways: Rig-RgDL (Rig for rigid body), which uses rigid body (RB) registration, and Def-RgDL (Def for deformable), which uses deformable image registration (DIR); both use U-Net as the DL model architecture. The two implementations were trained and evaluated on seven OARs in an institutional clinical head-and-neck dataset.
Results: Compared to the baseline approaches using registration or DL alone, RgDL achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSCs) and other distance-based metrics. Rig-RgDL achieved an average DSC of 84.5% on the seven OARs, higher than RB alone by 4.5% and DL alone by 4.7%. The average DSC of Def-RgDL was 86.5%, higher than DIR alone by 2.4% and DL alone by 6.7%. The inference time required by the DL model component to generate final segmentations of all seven OARs was less than 1 s. Examining the contours from RgDL and DL case by case, we found that RgDL was less susceptible to image artifacts. We also studied how the performance of RgDL and DL varies with the size of the training dataset: the DSC of DL dropped by 12.1% as the number of training cases decreased from 22 to 5, whereas RgDL dropped by only 3.4%.
Conclusion: By incorporating patient-specific registration guidance into a population-based DL segmentation model, the RgDL framework overcame the obstacles associated with online CBCT segmentation, including low image quality and insufficient training data, and achieved better segmentation accuracy than the baseline methods. The resulting segmentation accuracy and efficiency show promise for applying the RgDL framework to online ART.
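One natural way to realize the "registration guidance" described above is to feed the propagated planning contours to the network as additional input channels alongside the CBCT. The PyTorch sketch below shows this assumed encoding; the abstract does not specify how the guidance is injected into the U-Net, so the channel layout, the `RgDLSegmenter` name, and the tiny convolutional backbone (a placeholder for the paper's U-Net) are all illustrative.

```python
import torch
import torch.nn as nn

NUM_OARS = 7  # the seven head-and-neck OARs from the study

class RgDLSegmenter(nn.Module):
    """Assumed guidance encoding: registration-propagated planning contours
    are stacked with the CBCT as extra input channels, so the network
    refines patient-specific guidance rather than segmenting from scratch."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(  # placeholder for a 3D U-Net
            nn.Conv3d(1 + NUM_OARS, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, NUM_OARS, kernel_size=1),
        )

    def forward(self, cbct, guidance):
        # cbct: (B, 1, D, H, W); guidance: (B, NUM_OARS, D, H, W) binary
        # masks from rigid (Rig-RgDL) or deformable (Def-RgDL) registration.
        return self.backbone(torch.cat([cbct, guidance], dim=1))

# Usage: logits = RgDLSegmenter()(cbct_batch, propagated_contour_batch)
```

A design note: because the guidance channels already place each OAR approximately, the network's job reduces to local refinement, which is consistent with the reported robustness to artifacts and to small training sets.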
Efficient, reliable, and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging because the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure used in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information, in the form of markers' location cues, into a U-Net model. This design forces the model to encode location-related features, emphasizing regions with high saliency and suppressing low-saliency regions. The saliency maps were generated by identifying markers on CT images. Marker locations were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input to the SDL-Seg network. Our in-house dataset comprised 145 prone CT images from 29 post-operative breast cancer patients who received a 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5), and test (5) sets. The performance of the proposed method was compared against a basic U-Net. Our model achieved mean (standard deviation) values of 76.4 (±2.7)%, 6.76 (±1.83) mm, and 1.9 (±0.66) mm for DSC, HD95, and ASD, respectively, on the test set, with a computation time below 11 seconds per CT volume. SDL-Seg showed superior performance relative to the basic U-Net on all evaluation metrics while preserving a low computation cost. These findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of the online treatment planning procedure for PBI, such as GammaPod-based PBI.
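The saliency-map construction is spelled out in the abstract (marker identification, distance transformation, Gaussian filtering), so it can be sketched directly. In the sketch below, the exponential decay constant and the smoothing sigma are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def saliency_map_from_markers(marker_mask, spacing=(1.0, 1.0, 1.0),
                              decay_mm=20.0, sigma_vox=2.0):
    """Convert a binary marker mask into a smooth, probability-like
    saliency map via distance transform + Gaussian filter, per the
    abstract's recipe. decay_mm and sigma_vox are assumed values."""
    # Distance (in mm) from every voxel to the nearest marker voxel.
    dist = distance_transform_edt(~marker_mask.astype(bool), sampling=spacing)
    # 1.0 at the markers, falling off exponentially with distance.
    saliency = np.exp(-dist / decay_mm)
    # Smooth to avoid sharp transitions at marker boundaries.
    return gaussian_filter(saliency, sigma=sigma_vox)

# The saliency map is then stacked with the CT as a multi-channel input:
# net_input = np.stack([ct_volume, saliency_map_from_markers(markers)], axis=0)
```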
Objectives: To develop a rapid and accurate 4D deformable image registration (DIR) approach for online adaptive radiotherapy.
Methods: We propose a deep learning (DL)-based few-shot registration network (FR-Net) that generates deformation vector fields from each respiratory phase to an implicit reference image, thereby mitigating the bias introduced by the selection of a reference image. FR-Net is pretrained with a limited amount of unlabeled 4D data and further optimized by maximizing the intensity similarity within one specific four-dimensional computed tomography (4DCT) scan. Owing to the learning ability of DL models, this few-shot learning strategy facilitates generalization to other 4D datasets and accelerates the optimization process.
Results: FR-Net was evaluated on 4D groupwise and 3D pairwise registration using the thoracic 4DCT datasets DIR-Lab and POPI. It achieved average target registration errors of 1.48 mm and 1.16 mm between the maximum inhalation and exhalation phases on DIR-Lab and POPI, respectively, requiring approximately 2 min to optimize one 4DCT. Overall, FR-Net outperformed state-of-the-art methods in registration accuracy while maintaining a low computational time.
Conclusion: We developed a few-shot groupwise DIR algorithm for 4DCT images. Its registration performance and computational efficiency demonstrate the prospective application of this approach to registration tasks in online adaptive radiotherapy.
Advances in knowledge: This work exploits DL models to solve the optimization problem of registering 4DCT scans, combining groupwise registration with a few-shot learning strategy to address the problems of long computational time and inferior registration accuracy.
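The central idea of registering every phase to an implicit reference can be expressed as a groupwise similarity loss: warp all phases with their predicted deformation fields and penalize voxelwise disagreement, so the effective reference is the group mean and is never materialized. The PyTorch sketch below shows this under stated assumptions; it is not FR-Net's exact loss, and `warp` assumes displacements given in normalized grid coordinates.

```python
import torch
import torch.nn.functional as F

def groupwise_variance_loss(warped_phases):
    """Implicit-reference groupwise similarity: minimizing voxelwise
    variance across warped phases drives all phases toward the group
    mean. A sketch of the groupwise idea, not the paper's exact loss."""
    # warped_phases: (P, 1, D, H, W), one warped volume per respiratory phase.
    mean_image = warped_phases.mean(dim=0, keepdim=True)
    return ((warped_phases - mean_image) ** 2).mean()

def warp(moving, dvf):
    """Warp a volume with a dense deformation vector field (DVF), assumed
    to hold displacements in normalized [-1, 1] grid coordinates."""
    # moving: (B, 1, D, H, W); dvf: (B, 3, D, H, W).
    B, _, D, H, W = moving.shape
    # Identity sampling grid, then add the predicted displacements.
    base = F.affine_grid(torch.eye(3, 4).unsqueeze(0).repeat(B, 1, 1),
                         size=(B, 1, D, H, W), align_corners=False)
    grid = base + dvf.permute(0, 2, 3, 4, 1)
    return F.grid_sample(moving, grid, align_corners=False)
```

In the few-shot setting, the pretrained network would be further optimized on a single 4DCT by minimizing this kind of loss over its predicted deformation fields.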