2018
DOI: 10.48550/arxiv.1804.04289
Preprint

Comparison of projection domain, image domain, and comprehensive deep learning for sparse-view X-ray CT image reconstruction

Kaichao Liang,
Hongkai Yang,
Yuxiang Xing

Abstract: X-ray Computed Tomography (CT) imaging has been widely used in clinical diagnosis, non-destructive examination, and public safety inspection. Sparse-view CT has great potential for radiation dose reduction and scan acceleration. However, sparse-view CT data are insufficient, and traditional reconstruction produces severe streaking artifacts. In this work, based on deep learning, we compared image reconstruction performance for sparse-view CT reconstruction with a projection domain network, an image dom…
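
The comparison described in the abstract starts from how badly plain filtered back-projection (FBP) degrades when views are sparse. Below is a minimal sketch, assuming NumPy and scikit-image (not tools used in the paper itself), that simulates a sparse-view scan of the Shepp-Logan phantom and reconstructs it with FBP; the streaking artifacts it produces are the input that the image-domain and comprehensive networks discussed here are meant to clean up. All counts and sizes are illustrative assumptions.

```python
# Sketch only: sparse-view FBP exhibits the streaking artifacts the paper targets.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                               # 400 x 400 test image

dense_angles  = np.linspace(0.0, 180.0, 360, endpoint=False)  # dense angular sampling
sparse_angles = np.linspace(0.0, 180.0, 60, endpoint=False)   # sparse-view: 60 views

dense_sino  = radon(phantom, theta=dense_angles)              # projection-domain data
sparse_sino = radon(phantom, theta=sparse_angles)

fbp_dense  = iradon(dense_sino, theta=dense_angles)           # near artifact-free reference
fbp_sparse = iradon(sparse_sino, theta=sparse_angles)         # heavy streaks from few views

print("RMSE, dense views :", np.sqrt(np.mean((fbp_dense - phantom) ** 2)))
print("RMSE, sparse views:", np.sqrt(np.mean((fbp_sparse - phantom) ** 2)))
```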

Cited by 10 publications (12 citation statements)
References 17 publications

Citation statements (ordered by relevance):
“…The benefits of including the projections within the network were demonstrated in Ref. [6], especially for sparse-view computed tomography (CT) data. One can instead compute the FBP from the low-dose projections to feed a network that removes the noise and artifacts that remain in the obtained image. Different structures can be chosen to generate the desired reconstruction, such as U-NET [7] in Ref.…”
Section: Introduction (mentioning)
confidence: 99%
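
The statement above describes the image-domain strategy: reconstruct with FBP first, then let a network remove the remaining noise and streaks. The PyTorch sketch below shows a generic residual denoiser trained against full-view reference images; it is an illustration of the idea, not the U-NET of Ref. [7] or any network compared in this paper, and the layer sizes, tensor shapes, and variable names are assumptions.

```python
# Hedged sketch: image-domain post-processing of a sparse-view FBP reconstruction.
import torch
import torch.nn as nn

class ImageDomainDenoiser(nn.Module):
    """Maps an artifact-laden FBP reconstruction to a cleaned image."""
    def __init__(self, channels: int = 64, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, fbp_image: torch.Tensor) -> torch.Tensor:
        # Residual learning: the network predicts the streak/noise component,
        # which is subtracted from the FBP input.
        return fbp_image - self.body(fbp_image)

# One training step with placeholder tensors (sparse-view FBP inputs vs.
# full-view reference reconstructions).
model = ImageDomainDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

fbp_batch = torch.randn(4, 1, 256, 256)     # placeholder sparse-view FBP images
target_batch = torch.randn(4, 1, 256, 256)  # placeholder full-view references

optimizer.zero_grad()
loss = loss_fn(model(fbp_batch), target_batch)
loss.backward()
optimizer.step()
```
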
“…× 10^6 trainable parameters, and VGG loss implies 20 × 10^6 extra … Table 2: Optimal hyperparameters (HPs) for each method. These HPs have been optimized on a validation set consisting of 20% of the slices obtained from the 10 training volumes.…”
(mentioning)
confidence: 99%
“…Fewer projection data are acquired with uniform spacing over the complete angular range. Similar approaches have been explored for this sparse-view CT: image post-processing [33][34][35][36], projection data completion [37][38][39], a combination of projection data completion and image post-processing [40], and end-to-end learning [41,42].…”
Section: Related Work (mentioning)
confidence: 99%
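
One of the categories listed in the statement above, projection data completion, fills in the missing views of the sinogram before reconstruction. A simple non-learned baseline is sketched below: linear interpolation along the angular axis with SciPy, which the cited learning-based methods replace with a trained network. The function name and array shapes are illustrative assumptions, not anything defined in the cited works.

```python
# Hedged sketch: classical sinogram completion by angular interpolation.
import numpy as np
from scipy.interpolate import interp1d

def complete_sinogram(sparse_sino: np.ndarray,
                      sparse_angles: np.ndarray,
                      dense_angles: np.ndarray) -> np.ndarray:
    """Interpolate a (detector_bins, n_sparse_views) sinogram onto dense_angles."""
    interp = interp1d(sparse_angles, sparse_sino, axis=1,
                      kind="linear", fill_value="extrapolate")  # extrapolate the last gap
    return interp(dense_angles)

# Usage with placeholder shapes matching the earlier FBP sketch:
sparse_angles = np.linspace(0.0, 180.0, 60, endpoint=False)
dense_angles  = np.linspace(0.0, 180.0, 360, endpoint=False)
sparse_sino   = np.random.rand(400, 60)            # placeholder sinogram
dense_sino    = complete_sinogram(sparse_sino, sparse_angles, dense_angles)
print(dense_sino.shape)                            # (400, 360)
```
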
“…Since neural networks are capable of predicting unknown data in both the Radon and image domains, a natural idea is to combine these two domains [48,49,34,50,51,52] to obtain better restoration results. Specifically, such a method first completes the Radon data and then removes the residual artifacts and noise from the images converted from the full-view Radon data.…”
Section: Introduction (mentioning)
confidence: 99%
“…In 2018, Zhao et al. proposed SVGAN [34], an artifact-reduction method for low-dose and sparse-view CT via a single model trained with a GAN. In 2019, Liang et al. [49] proposed a comprehensive network combining the projection and image domains. The projection estimation network is based on a Res-CNN structure, and the image-domain network takes advantage of U-Net.…”
Section: Introduction (mentioning)
confidence: 99%
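
The two statements above describe the dual-domain ("comprehensive") pipeline: complete the sinogram with a projection-domain network, bridge to the image domain with an analytic reconstruction such as FBP, and remove residual artifacts with an image-domain network. The sketch below wires these stages together with placeholder PyTorch modules and scikit-image's iradon; it is not the Res-CNN/U-Net combination of Liang et al., and all module definitions, shapes, and names are assumptions.

```python
# Hedged sketch: projection-domain refinement -> FBP bridge -> image-domain cleanup.
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import iradon

class ProjectionDomainNet(nn.Module):
    """Placeholder for the projection-domain (sinogram completion) network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, coarse_sino: torch.Tensor) -> torch.Tensor:
        # Refine a pre-interpolated sinogram; residual connection.
        return coarse_sino + self.net(coarse_sino)

class ImageDomainNet(nn.Module):
    """Placeholder for the image-domain artifact-removal network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, fbp_image: torch.Tensor) -> torch.Tensor:
        return fbp_image - self.net(fbp_image)

def comprehensive_reconstruct(coarse_sino: np.ndarray,
                              dense_angles: np.ndarray,
                              proj_net: ProjectionDomainNet,
                              img_net: ImageDomainNet) -> np.ndarray:
    """coarse_sino: (detector_bins, n_dense_views) pre-interpolated sinogram."""
    with torch.no_grad():
        s = torch.from_numpy(coarse_sino).float()[None, None]   # to NCHW
        refined_sino = proj_net(s)[0, 0].numpy()                # projection domain
        fbp = iradon(refined_sino, theta=dense_angles)          # analytic FBP bridge
        x = torch.from_numpy(fbp).float()[None, None]
        return img_net(x)[0, 0].numpy()                         # image domain

# Usage with placeholder data (128 detector bins, 180 dense views):
dense_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
coarse_sino = np.random.rand(128, 180).astype(np.float32)
recon = comprehensive_reconstruct(coarse_sino, dense_angles,
                                  ProjectionDomainNet(), ImageDomainNet())
print(recon.shape)
```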