X-ray computed tomography (CT) is essential for medical diagnosis and industrial nondestructive testing. The aim of CT is to recover, or reconstruct, the image from projection data. However, the reconstructed image usually suffers from complex artifacts and noise, particularly when the sampling is insufficient or the dose is low. To address these issues, a full automatic reconstruction (FAR) net is proposed for CT reconstruction via deep learning. Unlike the usual networks in deep-learning reconstruction, the proposed neural network is an end-to-end network that predicts the image directly from projection data. The main challenge for such a FAR-net is the space complexity of CT reconstruction with fully connected (FC) networks: for a CT image of size N × N, a typical memory requirement for image reconstruction is O(N^4), which is unacceptable for a conventional computing device, e.g., a GPU workstation. In this paper, we utilize a series of smaller fully connected layers (FCL) to replace the huge Radon transform matrix, based on sparse nonnegative matrix factorization (SNMF) theory. With this approach, the FAR-net is able to reconstruct images of size 512 × 512 on a single workstation. The results of numerical experiments show that the FAR-net, built on the factorized projection matrix, reconstructs CT images from projection data with quality superior to conventional methods such as optimization-based approaches. Meanwhile, the factorization of the inverse projection matrix is validated in both simulated and real experiments.
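The idea of replacing one huge inverse-projection matrix by a chain of smaller fully connected layers can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the layer sizes, the number of factors, and the ReLU clamping used to keep activations nonnegative are illustrative choices in the spirit of SNMF, not values or mechanisms taken from the paper.

```python
# Minimal sketch: a chain of small FC layers standing in for one huge
# inverse-projection matrix. Sizes and ranks are illustrative assumptions.
import torch
import torch.nn as nn


class FactorizedBackProjection(nn.Module):
    """Replace a single (n_pixels x n_rays) matrix by a product of smaller
    factors, each realized as a fully connected layer."""

    def __init__(self, n_rays: int, img_size: int, ranks=(2048, 2048)):
        super().__init__()
        self.img_size = img_size
        dims = [n_rays, *ranks, img_size * img_size]
        # Each small FC layer plays the role of one factor of the huge matrix.
        self.factors = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1], bias=False)
            for i in range(len(dims) - 1)
        )

    def forward(self, sinogram: torch.Tensor) -> torch.Tensor:
        x = sinogram.flatten(1)          # (batch, n_rays)
        for layer in self.factors:
            # ReLU keeps intermediate results nonnegative, mimicking the
            # nonnegativity constraint of SNMF (an assumption of this sketch).
            x = torch.relu(layer(x))
        return x.view(-1, 1, self.img_size, self.img_size)
```

A single dense matrix mapping a flattened sinogram to a 512 × 512 image holds on the order of N^4 weights; the factor chain stores only the sum of the much smaller per-layer weight matrices, which is what makes training on a single workstation feasible.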
Purpose: Limited-angle computed tomography is a challenging but important task in certain medical and industrial applications of nondestructive testing. The limited-angle reconstruction problem is highly ill-posed, and conventional reconstruction algorithms introduce heavy artifacts. Various models and methods have been proposed to improve reconstruction quality by introducing different priors regarding the projection data or the ideal images. However, the assumed priors may not be practically applicable to all limited-angle reconstruction problems. Convolutional neural networks (CNNs) show great promise in modeling data coupling and have recently become an important technique in medical imaging applications. Although existing CNN methods have demonstrated promising results, their robustness remains a concern. In this paper, in light of the theory of visible and invisible boundaries, we propose an alternating edge-preserving diffusion and smoothing neural network (AEDSNN) for limited-angle reconstruction that builds the visible boundaries into its structure as priors. The proposed method generalizes the alternating edge-preserving diffusion and smoothing (AEDS) method for limited-angle reconstruction developed in the literature by replacing its regularization terms with CNNs, which effectively relaxes the piecewise constant assumption made by AEDS.

Methods: The AEDSNN is derived by unrolling the AEDS algorithm. AEDSNN consists of several blocks, each corresponding to one iteration of the AEDS algorithm. In each iteration of the AEDS algorithm, three subproblems are solved sequentially, so each block of AEDSNN has three main layers: a data matching layer, an x-direction regularization layer for visible-edge diffusion, and a y-direction regularization layer for artifact suppression. The data matching layer is implemented by the conventional ordered-subset simultaneous algebraic reconstruction technique (OS-SART), while the two regularization layers are modeled by CNNs for more intelligent and better encoding of priors regarding the reconstructed images. To further strengthen the visible-edge prior, an attention mechanism and pooling layers are incorporated into AEDSNN to facilitate edge-preserving diffusion from the visible edges.

Results: We have evaluated the performance of AEDSNN by comparing it with popular algorithms for limited-angle reconstruction. Experiments on a medical dataset show that the proposed AEDSNN effectively breaks through the piecewise constant assumption usually made by conventional reconstruction algorithms and works much better for piecewise smooth images with nonsharp edges. Experiments on a printed circuit board (PCB) dataset show that AEDSNN better encodes and utilizes the visible-edge prior, and its reconstructions are consistently better than those of the competing algorithms.

Conclusions: A deep-learning approach for limited-angle reconstruction is proposed in this paper, which significantly ...
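One block of the unrolled structure described above can be sketched as follows. This is a minimal sketch under simplifying assumptions, not the authors' code: a differentiable forward projector `A` and its adjoint `At` are assumed to be supplied by the caller, the OS-SART data matching layer is replaced by a single gradient-like update for brevity, and the regularization CNNs omit the attention mechanism and pooling layers used in the paper.

```python
# Minimal sketch of an unrolled AEDS-style network: each block performs
# data matching, then x-direction and y-direction learned regularization.
import torch
import torch.nn as nn


def small_cnn(channels: int = 32) -> nn.Sequential:
    """Tiny residual-style CNN standing in for one regularization term."""
    return nn.Sequential(
        nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(channels, 1, 3, padding=1),
    )


class AEDSBlock(nn.Module):
    def __init__(self, A, At, step: float = 1e-3):
        super().__init__()
        self.A, self.At = A, At
        self.step = nn.Parameter(torch.tensor(step))  # learnable step size
        self.reg_x = small_cnn()   # x-direction layer: diffuse visible edges
        self.reg_y = small_cnn()   # y-direction layer: suppress artifacts

    def forward(self, img, sinogram):
        # 1) data matching (stand-in for one OS-SART sweep)
        img = img - self.step * self.At(self.A(img) - sinogram)
        # 2) x-direction regularization (learned, replaces the AEDS prior)
        img = img + self.reg_x(img)
        # 3) y-direction regularization
        img = img + self.reg_y(img)
        return img


class AEDSNNSketch(nn.Module):
    def __init__(self, A, At, n_blocks: int = 5):
        super().__init__()
        self.blocks = nn.ModuleList(AEDSBlock(A, At) for _ in range(n_blocks))

    def forward(self, img0, sinogram):
        img = img0
        for block in self.blocks:
            img = block(img, sinogram)
        return img
```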
To solve the problem of learning-based computed tomography (CT) reconstruction, several reconstruction networks have been proposed. However, applying neural networks to tomographic reconstruction remains challenging due to the unacceptable memory space requirement. In this study, we present a novel lightweight block reconstruction network (LBRN), which transforms the reconstruction operator into a deep neural network by unrolling the filtered back-projection (FBP) method. Specifically, the proposed network contains two main modules, corresponding to the filtering and back-projection steps of the FBP method, respectively. The first module of the LBRN decouples the Radon-transform relationship between the reconstructed image and the projection data, so the following module, the block back-projection module, can use a block reconstruction strategy. Because each image block is connected only with part of the filtered projection data, the network structure is greatly simplified and the number of parameters of the whole network is dramatically reduced. Moreover, the approach is trained end to end, works directly from raw projection data, and does not depend on any initial image. Five reconstruction experiments are conducted to evaluate the performance of the proposed LBRN: full angle, low-dose CT, region of interest (ROI), metal artifact reduction, and a real data experiment. The results show that the LBRN can be effectively introduced into the reconstruction process and has outstanding advantages across the different reconstruction problems.
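The block back-projection idea can be sketched as follows. This is a minimal sketch, not the authors' code: the learnable 1D detector filter, the per-block fully connected layers, and especially the index map `bins_for_block` (which in practice would be derived from the scan geometry rather than drawn at random) are illustrative assumptions.

```python
# Minimal sketch of a block reconstruction network: a learnable 1D filter
# along the detector axis, then small per-block FC layers that map only the
# relevant detector bins to each image block.
import torch
import torch.nn as nn


class LBRNSketch(nn.Module):
    def __init__(self, n_views, n_dets, img_size=512, block=64, bins_per_block=128):
        super().__init__()
        self.block = block
        self.n_blocks = (img_size // block) ** 2
        # Learnable ramp-like filter applied along the detector dimension.
        self.filter = nn.Conv1d(1, 1, kernel_size=63, padding=31, bias=False)
        # One small FC layer per image block: inputs are the selected
        # detector bins over all views, outputs are the block's pixels.
        self.backproj = nn.ModuleList(
            nn.Linear(n_views * bins_per_block, block * block, bias=False)
            for _ in range(self.n_blocks)
        )
        # Hypothetical placeholder: which detector bins feed each block.
        # A real implementation would compute this from the geometry.
        self.register_buffer(
            "bins_for_block",
            torch.randint(0, n_dets, (self.n_blocks, bins_per_block)),
        )

    def forward(self, sino):                       # sino: (batch, n_views, n_dets)
        b, v, d = sino.shape
        filt = self.filter(sino.reshape(b * v, 1, d)).reshape(b, v, d)
        blocks = []
        for k, fc in enumerate(self.backproj):
            sel = filt[:, :, self.bins_for_block[k]].reshape(b, -1)
            blocks.append(fc(sel).view(b, self.block, self.block))
        # Tiling the blocks back onto the image grid is omitted for brevity.
        return torch.stack(blocks, dim=1)
```

Because each FC layer sees only its block's bins instead of the full sinogram-to-image mapping, the per-layer weight matrices stay small, which is the source of the memory reduction claimed above.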