Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, many recent approaches offer individualized solutions based on specialized task-specific architectures, or require refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation which operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in the field of generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize a discriminator network as a trainable feature extractor which penalizes the discrepancy between the translated medical images and the desired modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images to the translated images. Additionally, we present a new generator architecture, titled CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN on three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches.
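The non-adversarial losses mentioned above can be illustrated with a minimal numpy sketch. This is not the MedGAN implementation; it only shows, under simplified assumptions, the two standard ingredients the abstract names: a style loss built from Gram matrices of feature maps, and a perceptual (feature-matching) loss that compares feature maps extracted from several layers of a network such as the discriminator. The function names and shapes here are illustrative.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map;
    captures texture statistics for a style-transfer loss."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_translated, feat_target):
    """Mean squared difference between the Gram matrices of the
    translated and target feature maps (a single network layer)."""
    g1 = gram_matrix(feat_translated)
    g2 = gram_matrix(feat_target)
    return float(np.mean((g1 - g2) ** 2))

def perceptual_loss(feats_translated, feats_target):
    """Feature-matching loss: L1 distance between feature maps
    taken from several layers of a trainable feature extractor."""
    return float(sum(np.mean(np.abs(a - b))
                     for a, b in zip(feats_translated, feats_target)))
```

In a full training loop these terms would be weighted and added to the adversarial loss; the weighting scheme is framework-specific and not shown here.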
Purpose: To obtain quantitative measures of human body fat compartments from whole body MR datasets for risk estimation in subjects prone to metabolic diseases, without the need for any user interaction or expert knowledge. Materials and Methods: Sets of axial T1-weighted spin-echo images of the whole body were acquired. The images were segmented using a modified fuzzy c-means algorithm. A separation of the body into anatomic regions along the body axis was performed to define regions with visceral adipose tissue present, and to standardize the results. In abdominal image slices, the adipose tissue compartments were divided into subcutaneous and visceral compartments using an extended snake algorithm. The slice-wise areas of different tissues were plotted along the slice position to obtain topographic fat tissue distributions. Results: Results from automatic segmentation were compared with manual segmentation. Relatively low mean deviations were obtained for the class of total tissue (4.48%) and visceral adipose tissue (3.26%). The deviation of total adipose tissue was slightly higher (8.71%). Conclusion: The proposed algorithm enables the reliable and completely automatic creation of adipose tissue distribution profiles of the whole body from multislice MR datasets, reducing whole examination and analysis time to less than half an hour.
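The fuzzy c-means clustering underlying the segmentation step can be sketched as follows. This is the textbook algorithm on a 1-D intensity vector, not the modified variant used in the study; the fuzzifier `m`, iteration count, and seeding are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector x.
    m > 1 is the fuzzifier; returns the membership matrix U
    (n_samples x n_clusters) and the cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)           # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)   # fuzzily weighted centroids
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))           # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers
```

For MR fat segmentation, the intensity vector would be the flattened slice and each voxel would be assigned to the class with the highest membership.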
Purpose Motion is one extrinsic source of imaging artifacts in MRI that can strongly deteriorate image quality and, thus, impair diagnostic accuracy. In addition to involuntary physiological motion such as respiration and cardiac motion, intended and accidental patient movements can occur. Any impairment by motion artifacts can reduce the reliability and precision of the diagnosis, and a motion-free reacquisition can become time- and cost-intensive. Numerous motion correction strategies have been proposed to reduce or prevent motion artifacts. These methods have in common that they need to be applied during the actual measurement procedure with a priori knowledge about the expected motion type and appearance. For retrospective motion correction without any a priori knowledge, this problem is still challenging. Methods We propose the use of deep learning frameworks to perform retrospective motion correction in a reference-free setting by learning from pairs of motion-free and motion-affected images. For this image-to-image translation problem, we propose and compare a variational autoencoder and a generative adversarial network. Feasibility and the influence of motion type and optimal architecture are investigated by blinded subjective image quality assessment and by quantitative image similarity metrics. Results We observed that generative adversarial network-based motion correction is feasible, producing near-realistic motion-free images as confirmed by blinded subjective image quality assessment. Generative adversarial network-based motion correction accordingly resulted in images with high evaluation metrics (normalized root mean squared error <0.08, structural similarity index >0.8, normalized mutual information >0.9). Conclusion Deep learning-based retrospective restoration of motion artifacts is feasible, resulting in near-realistic motion-free images.
However, the image translation task can alter or hide anatomical features and, therefore, the clinical applicability of this technique has to be evaluated in future studies.
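The three similarity metrics reported above (NRMSE, SSIM, NMI) can be sketched in a few lines of numpy. This is a simplified illustration, not the evaluation code of the study: the SSIM here is computed over a single global window, whereas the standard index averages the same statistic over local sliding windows, and the NMI uses Studholme's (H(A)+H(B))/H(A,B) form on a joint histogram; bin count and data range are assumptions.

```python
import numpy as np

def nrmse(ref, img):
    """Root mean squared error, normalized by the reference intensity range."""
    rng = ref.max() - ref.min()
    return float(np.sqrt(np.mean((ref - img) ** 2)) / rng)

def global_ssim(ref, img, data_range=1.0):
    """Single-window (global) SSIM; the standard index averages this
    statistic over local sliding windows instead."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu1, mu2 = ref.mean(), img.mean()
    v1, v2 = ref.var(), img.var()
    cov = ((ref - mu1) * (img - mu2)).mean()
    return float(((2 * mu1 * mu2 + c1) * (2 * cov + c2)) /
                 ((mu1 ** 2 + mu2 ** 2 + c1) * (v1 + v2 + c2)))

def normalized_mutual_info(ref, img, bins=64):
    """NMI from a joint intensity histogram: (H(a) + H(b)) / H(a, b)."""
    joint, _, _ = np.histogram2d(ref.ravel(), img.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    def entropy(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))
    return float((entropy(px) + entropy(py)) / entropy(p))
```

For identical images these reduce to NRMSE = 0, SSIM = 1, and NMI = 2 under this normalization; the thresholds quoted in the abstract refer to the authors' own implementations, which may differ in windowing and normalization.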