2019
DOI: 10.1109/tmi.2018.2884053

3D Auto-Context-Based Locality Adaptive Multi-Modality GANs for PET Synthesis

Abstract: Positron emission tomography (PET) has been widely used in recent years. To minimize the potential health risk caused by the tracer radiation inherent to PET scans, it is of great interest to synthesize a high-quality PET image from a low-dose one and thereby reduce the radiation exposure. In this paper, we propose a 3D auto-context-based locality adaptive multi-modality generative adversarial networks model (LA-GANs) to synthesize a high-quality FDG PET image from the low-dose one with the accompanying MRI image…
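The abstract describes fusing the low-dose PET and the accompanying MRI input with locality-adaptive weights inside a 3D GAN generator. Below is a minimal, illustrative PyTorch sketch of such a fusion layer feeding a toy 3D generator; the module names, layer counts, and kernel sizes are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch only (not the authors' code): a locality-adaptive
# fusion of low-dose PET and MRI volumes, followed by a toy 3D generator.
import torch
import torch.nn as nn

class LocalityAdaptiveFusion(nn.Module):
    """Learns per-voxel weights for mixing the two input modalities."""
    def __init__(self):
        super().__init__()
        # 1x1x1 conv predicts two fusion weights (PET, MRI) at every voxel.
        self.weight_net = nn.Conv3d(2, 2, kernel_size=1)

    def forward(self, low_dose_pet, mri):
        # Both inputs: (batch, 1, D, H, W)
        x = torch.cat([low_dose_pet, mri], dim=1)
        w = torch.softmax(self.weight_net(x), dim=1)      # per-voxel weights
        fused = w[:, :1] * low_dose_pet + w[:, 1:] * mri  # weighted mix
        return fused

class Generator3D(nn.Module):
    """Toy 3D generator: fused input -> synthesized standard-dose PET."""
    def __init__(self):
        super().__init__()
        self.fuse = LocalityAdaptiveFusion()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, low_dose_pet, mri):
        return self.net(self.fuse(low_dose_pet, mri))

if __name__ == "__main__":
    g = Generator3D()
    pet = torch.randn(1, 1, 16, 16, 16)
    mri = torch.randn(1, 1, 16, 16, 16)
    print(g(pet, mri).shape)  # torch.Size([1, 1, 16, 16, 16])

In the paper this fusion sits inside an adversarially trained, auto-context pipeline; the sketch only shows the per-location weighting idea for combining the two modalities.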

Cited by 169 publications (88 citation statements)
References 38 publications
“…Wolterink et al. [18] utilized a GAN to transform low-dose CT into routine-dose CT images. Wang et al. [16] also demonstrated promising results when using a GAN to estimate high-dose PET images from low-dose ones. The list of GAN-based methods proposed for medical image synthesis is extensive [2], [19], [38]–[40].…”
Section: Related Work
confidence: 99%
“…Xiang et al. input real low-dose PET (25% dose) and T1 MRI images into a deep auto-context CNN to predict standard-dose PET images [124]. Similarly, Wang et al. utilized locality-adaptive multi-modality generative adversarial networks (LA-GANs) to synthesize high-quality PET images from low-dose PET and T1 MRI images alone [125] or combined with DTI [126]. Meanwhile, Chen et al. utilized simultaneously acquired MR images and simulated ultra-low-dose PET images to synthesize full-dose amyloid PET images using an encoder-decoder CNN [127].…”
Section: Image Denoising and Super-Resolution Tasks
confidence: 99%
“…The features produced by deep learning mirror human perception through operations such as convolution and pooling, and this yields better feature descriptors compared with low-level models. 12 Among earlier CNN architectures, AlexNet, GoogLeNet, and VGG are some accurate examples. Among these, the AlexNet model has an improved inception module, requires less memory and simpler computation, and is therefore more popular than the others.…”
Section: Review of Literature
confidence: 99%