Background: Cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT); however, its poor image quality limits its clinical application. In this study, we developed a deep learning-based approach to translate CBCT images into synthetic CT (sCT) images that preserve both CT image quality and CBCT anatomical structures.

Methods: A novel synthetic CT generative adversarial network (sCTGAN) was proposed for CBCT-to-CT translation via disentangled representation. Disentangled representation was employed to extract the anatomical information shared by the CBCT and CT image domains (see the schematic code sketch below). Both the on-board CBCT and the planning CT of 40 patients were used for network training, and those of another 12 patients were used for testing. The accuracy of our network was quantitatively evaluated using a series of statistical metrics, including the peak signal-to-noise ratio (PSNR), mean structural similarity index (SSIM), mean absolute error (MAE), and root-mean-square error (RMSE). The effectiveness of our network was compared against that of three state-of-the-art CycleGAN-based methods.

Results: The PSNR, SSIM, MAE, and RMSE between the sCT generated by sCTGAN and the deformed planning CT (dpCT) were 34.12 dB, 0.86, 32.70 HU, and 60.53 HU, respectively, while the corresponding values between the original CBCT and the dpCT were 28.67 dB, 0.64, 70.56 HU, and 112.13 HU. The RMSE (60.53±14.38 HU) of the sCT generated by sCTGAN was lower than that of the sCT generated by all three competing methods (72.40±16.03 HU by CycleGAN, 71.60±15.09 HU by CycleGAN-Unet512, and 64.93±14.33 HU by CycleGAN-AG).
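The disentanglement idea referenced in the Methods can be sketched as follows: a shared anatomy (content) encoder, a domain-specific style encoder, and a decoder that recombines a CBCT anatomy code with a CT-domain style to produce the sCT. This is a minimal illustrative sketch only; the abstract does not specify the sCTGAN architecture, so all module structures, sizes, and names here are assumptions (loosely following common content/style disentanglement designs), not the authors' implementation.

```python
# Minimal sketch of CBCT-to-CT translation via disentangled representation.
# All modules and sizes are illustrative assumptions, not the paper's sCTGAN.
import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    """Extracts the anatomy (content) code shared by the CBCT and CT domains."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Summarizes domain-specific appearance (noise, artifacts) as a vector."""
    def __init__(self, ch=32, style_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(ch, style_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    """Reconstructs an image from an anatomy code plus a style vector."""
    def __init__(self, ch=32, style_dim=8):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, ch * 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
        )
    def forward(self, anatomy, style):
        # Inject style by channel-wise scaling of the anatomy code
        # (a simple stand-in for AdaIN-style conditioning).
        scale = self.style_proj(style).unsqueeze(-1).unsqueeze(-1)
        return self.net(anatomy * scale)

# Translation: CBCT anatomy code + CT-domain style -> synthetic CT.
enc_a, enc_s, dec = AnatomyEncoder(), StyleEncoder(), Decoder()
cbct = torch.randn(1, 1, 256, 256)     # one CBCT slice (toy input)
ct_style = torch.randn(1, 8)           # style code sampled from the CT domain
sct = dec(enc_a(cbct), ct_style)       # sCT preserving CBCT anatomy
```

In this pattern, adversarial and reconstruction losses would train the anatomy code to carry only structure, so swapping in a CT-domain style yields CT-like image quality over CBCT anatomy.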
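For concreteness, the evaluation metrics reported in the Results can be computed as in the sketch below. This is not the authors' evaluation code; in particular, the data_range used for PSNR and SSIM is an assumption (taken from the reference volume), since the abstract does not state the normalization applied.

```python
# Illustrative computation of PSNR, SSIM, MAE, and RMSE between an sCT
# volume and the dpCT ground truth, both in Hounsfield units (HU).
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_sct(sct: np.ndarray, dpct: np.ndarray) -> dict:
    """sct, dpct: co-registered volumes in HU with identical shapes."""
    sct = sct.astype(np.float64)
    dpct = dpct.astype(np.float64)
    data_range = dpct.max() - dpct.min()        # assumed dynamic range
    diff = sct - dpct
    mae = np.abs(diff).mean()                   # HU
    rmse = np.sqrt((diff ** 2).mean())          # HU
    psnr = 20.0 * np.log10(data_range / rmse)   # dB
    ssim = structural_similarity(sct, dpct, data_range=data_range)
    return {"PSNR (dB)": psnr, "SSIM": ssim, "MAE (HU)": mae, "RMSE (HU)": rmse}
```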
Conclusions: The sCT generated by our sCTGAN network was closer to the ground truth (dpCT) than that of all three competing CycleGAN-based methods. sCTGAN thus provides an effective way to generate high-quality sCT, which has wide applications in IGRT and adaptive radiotherapy.