X-ray computed tomography (CT) is widely used in clinical practice. However, the ionizing X-ray radiation involved could increase cancer risk. Hence, radiation dose reduction has been an important topic in recent years. Few-view CT image reconstruction is one of the main ways to minimize the radiation dose and potentially allow a stationary CT architecture. In this paper, we propose a deep encoder-decoder adversarial reconstruction (DEAR) network for 3D CT image reconstruction from few-view data. Since the artifacts caused by few-view reconstruction appear in the 3D geometry rather than in isolated 2D slices, a 3D deep network has great potential for improving image quality in a data-driven fashion. More specifically, our proposed DEAR-3D network aims to reconstruct a 3D volume directly from clinical 3D spiral cone-beam image data. DEAR is validated on a publicly available abdominal CT dataset prepared and authorized by the Mayo Clinic. Compared with other 2D deep-learning methods, the proposed DEAR-3D network can utilize 3D information to produce promising reconstruction results.
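The abstract does not spell out the layer configuration, but the core idea of a 3D encoder-decoder generator trained against a discriminator can be sketched as follows. This is a minimal PyTorch sketch; the layer counts, channel widths, and residual connection are illustrative assumptions, not the authors' exact DEAR-3D design.

```python
import torch
import torch.nn as nn


class Generator3D(nn.Module):
    """3D encoder-decoder generator: downsamples a volume with strided 3D
    convolutions and upsamples it back with transposed 3D convolutions."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Residual connection (an assumption): predict a correction to the
        # artifact-contaminated few-view input volume.
        return x + self.decoder(self.encoder(x))


class Discriminator3D(nn.Module):
    """3D convolutional discriminator that scores volumes as real or generated."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(2 * ch, 1),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    # Toy few-view reconstruction volume: (batch, channel, depth, height, width).
    few_view_volume = torch.randn(1, 1, 32, 64, 64)
    g, d = Generator3D(), Discriminator3D()
    restored = g(few_view_volume)
    print(restored.shape, d(restored).shape)  # [1, 1, 32, 64, 64] and [1, 1]
```

In an adversarial training loop, the generator would be optimized against both a fidelity loss (e.g., voxel-wise error to the full-dose reconstruction) and the discriminator's score, which is what "adversarial reconstruction" refers to above.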
Keywords: Deep encoder-decoder adversarial network (DEAR), generative adversarial network (GAN), few-view CT, sparse-view CT, machine learning, deep learning.
X-ray computed tomography (CT) reconstructs cross-sectional images from projection data. However, the ionizing X-ray radiation associated with CT scanning may induce cancer and genetic damage. Therefore, the reduction of radiation dose has attracted major attention. Few-view CT image reconstruction is an important approach to reducing the radiation dose. Recently, data-driven algorithms have shown great potential for solving the few-view CT problem. In this paper, we develop a dual network architecture (DNA) for reconstructing images directly from sinograms. In the proposed DNA method, a point-wise fully-connected layer learns the backprojection process, requiring significantly less memory than prior arts do. The proposed method uses O(C × N × N_c) parameters, where N and N_c denote the dimension of the reconstructed images and the number of projections, respectively, and C is an adjustable parameter that can be set as low as 1. Our experimental results demonstrate that DNA achieves competitive performance relative to other state-of-the-art methods. Interestingly, natural images can be used to pre-train DNA to avoid overfitting when the amount of real patient images is limited.

iCT-Net reduces the computational complexity from O(N^4) in [8] to O(N^2 × N_d), where N and N_d denote the size of the medical images and the number of detectors, respectively. However, a single consumer-level GPU is still unable to handle the iCT-Net. In this study, we propose a dual network architecture (DNA) for CT image reconstruction, which reduces the required parameters from O(N^2 × N_d) of iCT-Net to O(C × N × N_c), where C is an adjustable hyper-parameter much smaller than N that can even be set as low as 1. Theoretically, the larger C is, the better the performance. The proposed network is trainable on a single consumer-level GPU such as an NVIDIA Titan Xp or NVIDIA 1080 Ti. The proposed DNA is inspired by the filtered backprojection (FBP) formulation and learns a refined filtration-backprojection process for reconstructing images directly from sinograms. For X-ray CT, every point in the sinogram domain relates only to the pixels/voxels on the corresponding X-ray path through the field of view. With this intuition, the reconstruction process in DNA is learned in a point-wise manner, which is the key ingredient for alleviating the memory burden. Also, an insufficient training dataset is another major issue in deep imaging. Inspired by [8], we first pre-train the network using natural images from ImageNet [10] and then fine-tune the model using real patient data. To the best of our knowledge, this is the first work using ImageNet images to pre-train a medical CT image reconstruction network. In the next section, we present a detailed explanation of our proposed DNA network. In the third section, we describe the experimental design, training data, and reconstruction results. Finally, in the last section, we discuss relevant issues and conclude the paper.
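To make the quoted complexities concrete, the following back-of-the-envelope comparison counts the parameters of an O(N^2 × N_d) dense backprojection layer versus the O(C × N × N_c) point-wise parameterization. The specific sizes N, N_d, N_c, and C below are illustrative assumptions, not values reported in the paper.

```python
# Rough parameter-count comparison for the two complexity classes quoted above.

def dense_backprojection_params(n: int, n_d: int) -> int:
    """O(N^2 x N_d): every detector bin connects to every image pixel (iCT-Net-style)."""
    return n * n * n_d


def dna_pointwise_params(n: int, n_c: int, c: int = 1) -> int:
    """O(C x N x N_c): the point-wise parameterization described for DNA."""
    return c * n * n_c


if __name__ == "__main__":
    N, N_d, N_c, C = 512, 736, 96, 1  # assumed example sizes, not the paper's settings
    dense = dense_backprojection_params(N, N_d)
    pointwise = dna_pointwise_params(N, N_c, C)
    print(f"dense backprojection layer: {dense:,} parameters")
    print(f"point-wise layer:           {pointwise:,} parameters")
    print(f"reduction factor:           {dense / pointwise:,.0f}x")
```

With these assumed sizes the point-wise formulation shrinks the layer by several orders of magnitude, which is why the text claims the network fits on a single consumer-level GPU.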