2022
DOI: 10.3390/cancers14184534

Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network

Abstract: Deep convolutional neural networks (CNNs) have helped enhance the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT. Most previous works, however, trained networks with intensity-based loss functions, possibly failing to promote image-feature similarity. Their verifications were also insufficient to demonstrate clinical applicability. This work investigated the effect of variable loss functions combining feature- and intensity-driven losses in synthetic CT generation, followed by…
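The abstract describes combining intensity-based losses (such as L1 and structural similarity) with a feature-driven (perceptual) loss. A minimal numpy sketch of such a combined objective is shown below; the gradient-based "feature" extractor and the weights `w_ssim`/`w_feat` are hypothetical stand-ins (a real perceptual loss would compare activations of a pretrained CNN), not the paper's implementation.

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute intensity difference.
    return np.mean(np.abs(pred - target))

def ssim_global(pred, target, c1=0.01**2, c2=0.03**2):
    # Global (single-window) SSIM on images normalized to [0, 1];
    # published work typically uses a windowed SSIM instead.
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    return ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p**2 + mu_t**2 + c1) * (var_p + var_t + c2))

def feature_loss(pred, target):
    # Hypothetical stand-in "features": image gradients. A perceptual
    # loss would use feature maps from a pretrained network.
    gp, gt = np.gradient(pred), np.gradient(target)
    return np.mean([np.mean(np.abs(a - b)) for a, b in zip(gp, gt)])

def combined_loss(pred, target, w_ssim=1.0, w_feat=0.1):
    # Illustrative weighting of intensity- and feature-driven terms.
    return (l1_loss(pred, target)
            + w_ssim * (1.0 - ssim_global(pred, target))
            + w_feat * feature_loss(pred, target))

rng = np.random.default_rng(0)
target = rng.random((64, 64))
pred = np.clip(target + 0.05 * rng.standard_normal((64, 64)), 0, 1)
loss = combined_loss(pred, target)
```

A perfect prediction drives all three terms to zero, so the combined loss vanishes only when both intensities and features agree.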

Cited by 6 publications (6 citation statements) · References 40 publications
“…Jihong et al [24] reported a rate of 95.7% ± 1.9% for sCT images with uncorrected CBCT and 97.1% ± 1.9% for sCT images with corrected CBCT. Meanwhile, Yoo et al [25] achieved an impressive result of 99.7% ± 0.0% for sCT using a combination of loss functions for model training. Both studies utilized advanced deep learning techniques, with Jihong et al [24] employing unsupervised learning via CycleGAN with HU correction, and Yoo et al [25] enhancing performance through the integration of perceptual loss into L1 and structural similarity loss functions during model training.…”
Section: Discussion
Confidence: 99%
“…These findings suggest that unsupervised deep learning and specialized loss functions can enhance the quality of sCT images, and preprocessing techniques such as HU correction can further improve outcomes.…”
Section: Discussion
Confidence: 99%
“…The initial efforts using deep-learning-based solutions were based on U-Nets [8,13]. Currently, some studies propose the use of DenseNet [15], while others use conditional generative adversarial networks (cGANs) to generate pCT images [9–12,16].…”
Section: State-of-the-art
Confidence: 99%
“…Hardware-based methods attempt to decrease the influence caused by scattering by utilizing an anti-scatter grid when acquiring CBCT images [15,16]. Image post-processing methods mainly consist of deformation of pCT [17][18][19][20][21], an estimation of scatter kernels [22], Monte Carlo simulations of scatter distribution [23,24], histogram matching [25], and deep learning-based methods [26][27][28][29][30][31]. Deformation of pCT is one of the commonly used methods, which is based on deformable registration between pCT and CBCT.…”
Section: Introduction
confidence: 99%
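Among the image post-processing methods listed in that citation, histogram matching is the simplest to sketch: CBCT intensities are remapped so their cumulative distribution follows that of the planning CT (pCT). A minimal numpy version is below; the array names and synthetic intensity distributions are illustrative assumptions, not data from the cited works.

```python
import numpy as np

def histogram_match(cbct, pct):
    # Remap CBCT intensities so their empirical CDF matches the pCT's.
    cbct_flat = cbct.ravel()
    # Unique CBCT intensities, their positions, and their counts.
    c_vals, c_idx, c_counts = np.unique(
        cbct_flat, return_inverse=True, return_counts=True)
    c_cdf = np.cumsum(c_counts).astype(np.float64) / cbct_flat.size
    # Empirical CDF of the reference pCT intensities.
    p_vals, p_counts = np.unique(pct.ravel(), return_counts=True)
    p_cdf = np.cumsum(p_counts).astype(np.float64) / pct.size
    # For each CBCT quantile, look up the pCT intensity at that quantile.
    matched_vals = np.interp(c_cdf, p_cdf, p_vals)
    return matched_vals[c_idx].reshape(cbct.shape)

rng = np.random.default_rng(1)
pct = rng.normal(40.0, 10.0, (128, 128))   # synthetic pCT intensities
cbct = rng.normal(0.0, 25.0, (128, 128))   # shifted, rescaled CBCT
matched = histogram_match(cbct, pct)
```

Because the mapping is built from cumulative distributions, it is monotonic: relative intensity ordering within the CBCT is preserved while the overall distribution is pulled toward the pCT's.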