Objective. Deep neural network (DNN) based methods have shown promising performance for low-dose computed tomography (LDCT) imaging. However, most DNN-based methods are trained on simulated labeled datasets, and the low-dose simulation algorithms are usually designed with simple statistical models that deviate from real clinical scenarios, which can lead to overfitting, instability, and poor robustness. To address these issues, in this work we present a structure-preserved meta-learning uniting network (abbreviated as "SMU-Net") to suppress noise-induced artifacts and preserve structural details in the unlabeled LDCT imaging task in real scenarios. Approach. Specifically, the presented SMU-Net contains two networks, i.e., a teacher network and a student network. The teacher network is trained on the simulated labeled dataset and then guides the training of the student network on unlabeled LDCT images via a meta-learning strategy. The student network is trained on the real LDCT dataset with pseudo-labels generated by the teacher network. Moreover, the student network adopts a co-teaching strategy to improve the robustness of the presented SMU-Net. Main results. We validate the proposed SMU-Net on three public datasets and one real low-dose dataset. The visual results indicate that the proposed SMU-Net has superior performance in reducing noise-induced artifacts and preserving structural details. The quantitative results show that the presented SMU-Net method generally obtains the highest peak signal-to-noise ratio (PSNR), the highest structural similarity index measure (SSIM), and the lowest root-mean-square error (RMSE) values, or the lowest natural image quality evaluator (NIQE) scores. Significance. We propose a meta-learning strategy to obtain high-quality CT images in the LDCT imaging task, designed to take advantage of unlabeled CT images to improve reconstruction performance in LDCT environments.
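To make the teacher-to-student transfer concrete, the following is a minimal sketch of training a student network on real unlabeled LDCT images with pseudo-labels produced by an already-trained teacher. It omits the meta-learning and co-teaching components described in the abstract, and the network definitions, data iterator, learning rate, and MSE objective are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Stand-in denoisers; the real teacher/student architectures are not specified here.
teacher = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 1, 3, padding=1))
student = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 1, 3, padding=1))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
criterion = nn.MSELoss()  # assumed objective, not taken from the paper

def train_student_on_unlabeled(real_ldct_batches, epochs=1):
    """Train the student on real unlabeled LDCT batches using pseudo-labels
    generated by the (frozen) teacher network."""
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for ldct in real_ldct_batches:          # ldct: (N, 1, H, W) tensor
            with torch.no_grad():
                pseudo_label = teacher(ldct)    # teacher output used as the target
            pred = student(ldct)
            loss = criterion(pred, pseudo_label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```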
Sparse-view computed tomography (CT) image reconstruction aims to shorten scanning time, reduce radiation dose, and yield high-quality CT images simultaneously. Researchers have developed deep learning (DL) based models for sparse-view CT reconstruction on circular scanning trajectories. However, cone-beam CT (CBCT) reconstruction from a circular trajectory is theoretically ill-posed and cannot reconstruct 3D CT images exactly, whereas CBCT reconstruction along a helical trajectory admits exact reconstruction because it satisfies the Tuy condition. Therefore, we propose a dual-domain helical projection-fidelity network (DHPF-Net) for sparse-view helical CT (SHCT) reconstruction. The DHPF-Net mainly consists of three modules, namely an artifact reduction network (ARN), a helical projection fidelity (HPF) module, and a union restoration network (URN). Specifically, the ARN reconstructs high-quality CT images by suppressing the noise artifacts of sparse-view images. The HPF module replaces the projection values at the measured positions in the forward projection of the ARN output with the measured sparse-view projections, which ensures data fidelity of the final predicted projection and preserves the sharpness of the reconstructed CT images. The URN further improves the reconstruction performance by combining the sparse-view images, the ARN images, and the HPF images. In addition, to extract the structural information of adjacent slices, leverage structural self-similarity, and avoid expensive computational cost, we convert the 3D CT volume into the channel direction. The experimental results on a public dataset demonstrate that the proposed method achieves superior performance for sparse-view helical CT image reconstruction.
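As a rough illustration of the projection-replacement step performed by the HPF module, the sketch below overwrites the predicted sinogram values at the measured sparse-view positions with the acquired data and keeps the network prediction everywhere else. The function name, array shapes, and view-sampling pattern are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def helical_projection_fidelity(predicted_sino, measured_sino, measured_mask):
    """Enforce data fidelity: keep network-predicted projections at unmeasured
    views, but overwrite the measured view positions with the acquired data.
    (Illustrative sketch; names and shapes are assumptions.)"""
    fused = predicted_sino.copy()
    fused[measured_mask] = measured_sino[measured_mask]
    return fused

# Toy usage: 720 views in the full sinogram, every 8th view actually measured.
n_views, n_dets = 720, 512
predicted = np.random.rand(n_views, n_dets).astype(np.float32)
measured = np.zeros_like(predicted)
mask = np.zeros(n_views, dtype=bool)
mask[::8] = True
measured[mask] = np.random.rand(mask.sum(), n_dets).astype(np.float32)

fused = helical_projection_fidelity(predicted, measured, mask)
assert np.allclose(fused[mask], measured[mask])   # measured views are preserved
```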