Background
There has been growing interest in low-dose computed tomography (LDCT) as a way to reduce the X-ray radiation dose delivered to patients. However, reconstructed LDCT images suffer from complex noise. Although deep learning-based methods have shown strong performance in LDCT denoising, most of them require a large number of paired normal-dose CT (NDCT) and LDCT images for training, which are hard to acquire in the clinic. This lack of paired training data significantly undermines the practicality of supervised deep learning-based methods. To alleviate this problem, unsupervised or weakly supervised deep learning-based methods are needed.
Purpose
We aimed to propose a method that achieves LDCT denoising without paired training data. Specifically, we first trained a neural network in a weakly supervised manner to simulate LDCT images from NDCT images. The simulated training pairs could then be used to train supervised deep denoising networks.
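As a rough illustration only, the sketch below shows how such a two-stage pipeline could be wired together; all class and function names are hypothetical placeholders, and the loss and optimizer choices are assumptions rather than details from the paper.

```python
# Hypothetical sketch of the two-stage pipeline: (1) simulate LDCT from NDCT
# with a trained degradation model, (2) train any supervised denoiser on the
# resulting pairs. Not the paper's code; names and hyperparameters are assumed.
import torch

def simulate_pairs(degradation_model, ndct_batch):
    """Stage 1: synthesize LDCT counterparts for normal-dose images."""
    with torch.no_grad():
        ldct_batch = degradation_model(ndct_batch)
    return ndct_batch, ldct_batch

def train_denoiser(denoiser, degradation_model, ndct_loader, epochs=50):
    """Stage 2: supervised training of a denoising network on simulated pairs."""
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()  # assumed; the actual training loss may differ
    for _ in range(epochs):
        for ndct in ndct_loader:
            target, noisy = simulate_pairs(degradation_model, ndct)
            pred = denoiser(noisy)
            loss = loss_fn(pred, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return denoiser
```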
Methods
We proposed a weakly supervised method to learn the degradation of LDCT from unpaired LDCT and NDCT images. Concretely, LDCT and NDCT images were fed into a shared flow-based model and projected into the latent space, where the degradation between low-dose and normal-dose images was modeled. The model was trained by minimizing the negative log-likelihood loss, with no requirement for paired training data. After training, an NDCT image can be fed into the trained flow-based model to generate the corresponding LDCT image. The simulated NDCT-LDCT image pairs can then be used to train supervised denoising neural networks for testing.
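For reference, the negative log-likelihood objective of a flow-based model with invertible mapping $f_\theta$ follows the standard change-of-variables formula; this is the generic form only, and the latent-space degradation term specific to our model is not shown:

$$\mathcal{L}_{\mathrm{NLL}}(\theta) = -\,\mathbb{E}_{x}\!\left[\log p_Z\!\bigl(f_\theta(x)\bigr) + \log\left|\det \frac{\partial f_\theta(x)}{\partial x}\right|\right],$$

where $p_Z$ is a simple base distribution (e.g., a standard Gaussian) over the latent space.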
Results
Our method achieved much better performance on LDCT image simulation than the most widely used image-to-image translation method, CycleGAN, according to the radial noise power spectrum. The simulated image pairs can be used to train any supervised LDCT denoising network; we validated their effectiveness on a classic convolutional neural network, REDCNN, and a novel transformer-based model, TransCT. Our method achieved a mean peak signal-to-noise ratio (PSNR) of 24.43 dB and a mean structural similarity (SSIM) of 0.785 on an abdomen CT dataset, and a mean PSNR of 33.88 dB and a mean SSIM of 0.797 on a chest CT dataset, outperforming several traditional CT denoising methods, the same networks trained with CycleGAN-generated data, and a novel transfer learning method. Moreover, our method was on par with fully supervised networks in terms of visual quality.
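As an aside, PSNR and SSIM of this kind can be computed with standard implementations; the minimal sketch below uses scikit-image and assumes slices normalized to a known data range (the paper's exact evaluation code is not specified here).

```python
# Minimal sketch of PSNR/SSIM evaluation for a denoised slice against its NDCT
# reference, using scikit-image (an assumed implementation choice).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(denoised: np.ndarray, reference: np.ndarray, data_range: float = 1.0):
    """Return (PSNR in dB, SSIM) for a single 2D slice."""
    psnr = peak_signal_noise_ratio(reference, denoised, data_range=data_range)
    ssim = structural_similarity(reference, denoised, data_range=data_range)
    return psnr, ssim
```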
Conclusion
We proposed a flow-based method to learn the LDCT degradation from unpaired training data only. It achieved impressive performance on LDCT synthesis, and the generated paired data could then be used to train neural networks for LDCT denoising. The denoising results were better than those of traditional and weakly supervised methods, and comparable to those of supervised deep learning methods.