2018
DOI: 10.18383/j.tom.2018.00042
Generative Adversarial Networks for the Creation of Realistic Artificial Brain Magnetic Resonance Images

Abstract: Even as medical data sets become more publicly accessible, most are restricted to specific medical conditions. Thus, data collection for machine learning approaches remains challenging, and synthetic data augmentation, such as generative adversarial networks (GAN), may overcome this hurdle. In the present quality control study, deep convolutional GAN (DCGAN)–based human brain magnetic resonance (MR) images were validated by blinded radiologists. In total, 96 T1-weighted brain images from 30 healthy individuals…
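As a hedged illustration of the adversarial training idea the abstract refers to (not the paper's DCGAN, which uses deep convolutional networks on MR images), the sketch below trains a minimal one-dimensional GAN in numpy: an affine generator learns to shift noise toward the real data distribution while a logistic-regression discriminator tries to tell real from generated samples. All parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data: 1-D samples from N(4, 1). The generator maps noise z ~ N(0, 1)
# through an affine transform g(z) = gw*z + gb and must learn to match them.
def sample_real(n):
    return rng.normal(4.0, 1.0, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

gw, gb = 1.0, 0.0   # generator parameters (illustrative starting values)
dw, db = 0.1, 0.0   # discriminator: d(x) = sigmoid(dw*x + db)
lr = 0.05

for step in range(2000):
    z = rng.normal(size=32)
    fake = gw * z + gb
    real = sample_real(32)

    # Discriminator ascent step: maximize log d(real) + log(1 - d(fake)).
    p_real = sigmoid(dw * real + db)
    p_fake = sigmoid(dw * fake + db)
    dw += lr * np.mean((1.0 - p_real) * real - p_fake * fake)
    db += lr * np.mean((1.0 - p_real) - p_fake)

    # Generator ascent step: maximize log d(fake) (non-saturating loss).
    p_fake = sigmoid(dw * fake + db)
    grad_fake = (1.0 - p_fake) * dw   # d log d(fake) / d fake
    gw += lr * np.mean(grad_fake * z)
    gb += lr * np.mean(grad_fake)

# After training, the generator's offset gb should have drifted toward the
# real data mean (around 4), since that is what fools the discriminator.
print(gb)
```

The same minimax dynamic, scaled up to convolutional generators and discriminators over image tensors, is what produces the synthetic MR images validated in this study.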

Cited by 80 publications (55 citation statements)
References 22 publications
“…Because the field is moving so fast, it needs to be pointed out that unlike in 3D GANs, using, for example, convolutional deep belief networks (Wu et al, 2015), we are dealing with fully texturized 3D image stacks and not only 3D binary shapes. Also, although the use of GANs in biomedical imaging is rapidly advancing, for example, for synthesizing artificial brain magnetic resonance images (Han et al, 2018;Kazuhiro et al, 2018) or thyroid tissue imaged by optical coherence tomography (Zhang et al, 2018), 3D applications like GANs for segmentation of liver CT scans (Yang et al, 2017) are still rare. Creating synthetic 3D test data, above all, demands accurately segmented and validated ground truth data for training.…”
Section: Discussion
confidence: 99%
“…We showed that the GAN-based synthetic whole-body CT images have comparable image fidelity to real images. For this, our study validated the synthetic images by multiple radiology experts because the visual Turing test could be greatly influenced by the reader’s level of expertise [ 10 , 32 , 33 ]. There was no significant difference in the accuracy between the reader groups.…”
Section: Discussion
confidence: 99%
“…The most common obviously artificial features in synthetic images were bizarre configurations and directions of small-to-medium vessels. This is probably due to the lack of the interslice shape continuity caused by the 2D CT image–training and the anatomical diversity of these vessels [ 10 , 35 ]. Therefore, to overcome these limitations, further work would require the generation of 3D CT images with larger and more diverse data sets.…”
Section: Discussion
confidence: 99%
“…In CNN, however, kernels are not predefined but are trained to perform a specific task from raw data. Kernels that are determined as the result of training are applied to the input images, then various feature maps at different levels are produced in the CNN. Third, CNN is more efficient for a completely new task because an already trained CNN (with trained parameters from another task, a concept that will be explained in more detail in transfer learning) can be slightly tuned for the new task.…”
Section: CNN
confidence: 99%
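The quoted passage describes kernels sliding over an input image to produce feature maps. A minimal numpy sketch of that operation follows, using a small hand-picked kernel for clarity; in a CNN the sliding-window computation is identical, but the kernel values are learned from data rather than chosen by hand. The 4×4 input and the kernel values here are illustrative, not taken from the cited article.

```python
import numpy as np

# Toy 4x4 input "image" with values 1..16 and a 2x2 kernel. Here the kernel
# is hand-designed (a simple vertical-edge detector); a CNN would instead
# learn its entries during training.
image = np.arange(1, 17, dtype=np.float32).reshape(4, 4)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

def conv2d_valid(img, k):
    """'Valid' 2-D cross-correlation, as used in CNN layers (no padding)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over each patch and sum the elementwise products.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

fmap = conv2d_valid(image, kernel)
print(fmap)  # 3x3 feature map; every entry is -2.0 for this smooth ramp input
```

Because the input increases by exactly 1 along each row, every horizontal difference the kernel computes is the same, so the resulting feature map is constant; on a real image, the same operation highlights vertical edges.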