Purpose: To propose a synthesis method for pseudo-CT (CTCycleGAN) images based on an improved 3D cycle-consistent generative adversarial network (CycleGAN), addressing the limitation that cone-beam CT (CBCT) cannot be applied directly to the correction of radiotherapy plans.
Methods: An improved U-Net with residual connections and attention gates served as the generator, and a fully convolutional network (FCN) served as the discriminator. A 3D gradient loss function was added to improve the imaging quality of the pseudo-CT images. Fivefold cross-validation was performed to validate the model. Each generated pseudo-CT was compared against the real CT image of the same patient (ground-truth CT, CTgt) using the mean absolute error (MAE) and structural similarity index (SSIM). The Dice similarity coefficient (DSC) was used to evaluate the segmentation results of the pseudo-CT and real CT images. The performance of the 3D CycleGAN was compared with that of a 2D CycleGAN using the normalized mutual information (NMI) and peak signal-to-noise ratio (PSNR) between the pseudo-CT and CTgt images. The dosimetric accuracy of the pseudo-CT images was evaluated by gamma analysis.
Results: The MAE values between CTCycleGAN and the real CT across the five folds were 52.03 ± 4.26 HU, 50.69 ± 5.25 HU, 52.48 ± 4.42 HU, 51.27 ± 4.56 HU, and 51.65 ± 3.97 HU, and the SSIM values were 0.87 ± 0.02, 0.86 ± 0.03, 0.85 ± 0.02, 0.85 ± 0.03, and 0.87 ± 0.03, respectively. The DSC values for the segmentation of the bladder, cervix, rectum, and bone between CTCycleGAN and the real CT images were 91.58 ± 0.45, 88.14 ± 1.26, 87.23 ± 2.01, and 92.59 ± 0.33, respectively. Compared with the 2D CycleGAN, the pseudo-CT images generated by the 3D CycleGAN were closer to the real images, with an NMI value of 0.90 ± 0.01 and a PSNR value of 30.70 ± 0.78. The gamma pass rate of the dose distribution between CTCycleGAN and CTgt was 97.0% (2%/2 mm).
Conclusion: The pseudo-CT images obtained with the improved 3D CycleGAN have more accurate electron density and anatomical structure.
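The 3D gradient loss mentioned in the Methods can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it penalizes the mean absolute difference between the finite-difference spatial gradients of the pseudo-CT and real CT volumes along each of the three axes, which encourages the generator to reproduce edges and anatomical boundaries.

```python
import numpy as np

def gradient_loss_3d(pred, target):
    """Mean absolute difference between the spatial gradients of two 3D
    volumes, averaged over the three axes (hypothetical sketch of a 3D
    gradient loss; axis order assumed (D, H, W))."""
    loss = 0.0
    for axis in range(3):
        grad_pred = np.diff(pred, axis=axis)    # finite-difference gradient of prediction
        grad_true = np.diff(target, axis=axis)  # finite-difference gradient of target
        loss += np.mean(np.abs(grad_pred - grad_true))
    return loss / 3.0
```

In a CycleGAN training loop this term would be added, with a weighting factor, to the adversarial and cycle-consistency losses.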
The registration of multi-resolution optical remote sensing images has been widely used in image fusion, change detection, and image stitching. However, traditional registration methods achieve poor accuracy in the registration of multi-resolution remote sensing images. In this study, we propose a framework for generating deep features via a deep residual encoder (DRE) fused with shallow features for multi-resolution remote sensing image registration. Through an L2-normalization Siamese network (L2-Siamese) based on the DRE, a multiscale loss function is used to learn the attribute characteristics and distance characteristics of two key points and obtain the trained feature extractor. Finally, the DRE is used to extract the deep features of the key points and their neighbors, which are concatenated with the shallow features into a fusion feature vector to complete the image registration. We performed comprehensive experiments on four sets of multi-resolution optical remote sensing images and two sets of synthetic aperture radar images. The results demonstrate that the proposed registration model can achieve sub-pixel registration. The relative registration accuracy improved by 1.6–7.5%, whereas the overall performance improved by 4.5–14.1%.
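The descriptor-fusion and matching step described above can be illustrated with a small sketch. All function names here are hypothetical and the details (L2 normalization of each descriptor before concatenation, nearest-neighbour matching by Euclidean distance) are assumptions about a typical L2-Siamese pipeline, not the paper's exact procedure.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """L2-normalize a descriptor so distances compare direction, not magnitude."""
    return v / (np.linalg.norm(v) + eps)

def fuse_features(deep, shallow):
    """Concatenate normalized deep (DRE) and shallow descriptors into one fusion vector."""
    return np.concatenate([l2_normalize(deep), l2_normalize(shallow)])

def match_keypoints(desc_a, desc_b):
    """For each fusion vector in desc_a, return the index of its nearest
    neighbour in desc_b by Euclidean distance."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(dists, axis=1)
```

Matched key-point pairs produced this way would then feed a transform estimator (e.g. RANSAC-fitted affine) to complete the registration.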
Objective: A multi-discriminator cycle-consistent generative adversarial network (MD-CycleGAN) model was proposed to synthesize higher-quality pseudo-CT from MRI.
Approach: MRI and CT images obtained at the simulation stage from patients with cervical cancer were selected to train the model. The generator adopted DenseNet as its main architecture. Local and global discriminators based on convolutional neural networks jointly discriminated the authenticity of the input image data. In the testing phase, the model was verified by four-fold cross-validation. In the prediction stage, data were selected to evaluate the anatomical and dosimetric accuracy of the pseudo-CT, which was compared with pseudo-CT synthesized by GANs whose generators were based on the ResNet, sU-Net, and FCN architectures.
Main results: There were significant differences (P < 0.05) in the four-fold cross-validation results on the peak signal-to-noise ratio and structural similarity index metrics between the pseudo-CT obtained with MD-CycleGAN and the ground-truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN had anatomical information closer to the CTgt, with a root mean square error of 47.83 ± 2.92 HU, a normalized mutual information value of 0.9014 ± 0.0212, and a mean absolute error of 46.79 ± 2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dosemax, Dosemin, and Dosemean within the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CTs. The u-values of the Wilcoxon test were 55.407, 41.82, and 56.208, and the differences were statistically significant. The 2%/2 mm gamma pass rate (%) of the proposed method was 95.45 ± 1.91, while those of the comparison methods (ResNet_GAN, sUnet_GAN, and FCN_GAN) were 93.33 ± 1.20, 89.64 ± 1.63, and 87.31 ± 1.94, respectively.
Significance: The pseudo-CT obtained with MD-CycleGAN has higher imaging quality and is closer to the CTgt in anatomy and dosimetry than the pseudo-CT from the other GAN models.
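The anatomical-accuracy metrics reported in these abstracts (MAE, RMSE, PSNR in HU) have standard definitions that can be sketched directly. This is a generic illustration, with an assumed `data_range` parameter for PSNR, not the evaluation code used in the studies.

```python
import numpy as np

def mae_hu(pseudo, ct):
    """Mean absolute error in HU between pseudo-CT and ground-truth CT."""
    return np.mean(np.abs(pseudo - ct))

def rmse_hu(pseudo, ct):
    """Root mean square error in HU."""
    return np.sqrt(np.mean((pseudo - ct) ** 2))

def psnr(pseudo, ct, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the assumed
    dynamic range of the CT intensities (e.g. the HU span)."""
    mse = np.mean((pseudo - ct) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Lower MAE/RMSE and higher PSNR indicate a pseudo-CT closer to the ground truth, which is the direction of the comparisons reported above.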
Herein, a Harris corner detection algorithm is proposed based on the concepts of iterated threshold segmentation and an adaptive iterative threshold (AIT-Harris), and a stepwise local stitching algorithm is used to obtain wide-field ultrasound (US) images. Cone-beam computed tomography (CBCT) and US images from 9 cervical cancer patients and 1 prostate cancer patient were examined. In the experiment, corner features were extracted with the AIT-Harris, Harris, and Moravec algorithms. Wide-field ultrasonic images were then obtained from the extracted features after local stitching, and the corner matching rates of all tested algorithms were compared. The accuracies of the drawn contours of organs at risk (OARs) were compared between the stitched ultrasonic images and CBCT. The corner matching rate of the Moravec algorithm was compared with those obtained by the Harris and AIT-Harris algorithms using paired-sample t tests (t = 6.142 and t = 31.859, P < .05), and the differences were statistically significant. The average Dice similarity coefficient between the automatically delineated bladder region based on wide-field US images and the manually delineated bladder region based on ground-truth CBCT images was 0.924, and the average Jaccard coefficient was 0.894. The proposed algorithm improved the accuracy of corner detection, and the stitched wide-field US image could modify the delineation range of OARs in the pelvic cavity.
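The Dice and Jaccard overlap measures used to compare the delineated bladder regions have standard definitions on binary masks, sketched below. This is a generic illustration of the two coefficients, not the study's evaluation code.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def jaccard(mask_a, mask_b):
    """Jaccard coefficient: |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union
```

The two are monotonically related (Dice = 2J / (1 + J)), so a Dice of 0.924 alongside a Jaccard of 0.894 reflects the same high overlap between the automatic and manual bladder contours.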