Purpose: In the clinic, computed tomography (CT) has evolved into an essential modality for diagnostic imaging via multidetector row CT (MDCT) and for image-guided intervention via cone beam CT (CBCT). Recognizing the increasing importance of axial MDCT/CBCT in clinical and preclinical applications, and the existence of cone beam (CB) artifacts in MDCT/CBCT images, we review the root causes, rendering mechanisms, and morphology of CB artifacts, as well as possible solutions for eliminating or reducing them.

Methods: By examining the null space in the Radon and Fourier domains, the root cause of CB artifacts in axial MDCT/CBCT (i.e., data insufficiency) is analytically investigated, followed by a review of the data sufficiency conditions and the "circle +" source trajectories. The rendering mechanisms and morphology of CB artifacts in axial MDCT/CBCT and their special cases (e.g., half/short scan and full scan with latitudinally displaced detector) are then analyzed, followed by a survey of potential solutions to suppress the artifacts. The phenomenon of imaged zone indention and its variation across FBP, BPF/DBPF, two-pass, and iterative CB reconstruction algorithms and/or schemes is discussed in detail.

Results: An interdomain examination of the null space provides an insightful understanding of the root cause of CB artifacts in axial MDCT/CBCT. The decomposition of the CB artifact rendering mechanisms facilitates understanding of the artifacts' behavior under different conditions and of the potential solutions to suppress them. An inspection of the imaged zone indention phenomenon provides guidance on the design and implementation of CB image reconstruction algorithms and schemes for CB artifact suppression in axial MDCT/CBCT.
Conclusions: With the increasing importance of axial MDCT/CBCT in clinical and preclinical applications, this review article can update the community with in-depth information on, and clarification of, the latest progress in dealing with CB artifacts, and thus increase clinical/preclinical confidence.
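The data sufficiency condition referenced above is classically stated as Tuy's condition: exact CB reconstruction at a point requires that every plane passing through that point intersect the source trajectory. A compact statement, with $\mathbf{a}(\lambda)$ denoting the source trajectory:

```latex
\forall\, \mathbf{x} \in \Omega,\ \forall\, \boldsymbol{\theta} \in S^{2}:\quad
\exists\, \lambda \ \text{such that}\ \bigl(\mathbf{x} - \mathbf{a}(\lambda)\bigr) \cdot \boldsymbol{\theta} = 0
```

A single axial circle violates this condition for planes that are parallel to, but do not contain, the scan plane; this is precisely the data insufficiency that produces CB artifacts, and it motivates the "circle +" trajectories (e.g., circle plus line or circle plus arc) surveyed in the review.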
Purpose: Gadolinium-based contrast agents (GBCAs) are widely administered in MR imaging for diagnostic studies and treatment planning. Although GBCAs are generally thought to be safe, various health and environmental concerns have recently been raised about their use in MR imaging. The purpose of this work is to derive synthetic contrast-enhanced MR images from their unenhanced counterparts, thereby eliminating the need for GBCAs, using a cascade deep learning workflow that incorporates contour information into the network.

Methods and materials: The proposed workflow consists of two sequential networks: (1) a retina U-Net, first trained to derive semantic features representing the tumor regions from the non-contrast MR images; and (2) a synthesis module, trained after the retina U-Net to take the concatenation of the semantic feature maps and the non-contrast MR image as input and to generate the synthetic contrast-enhanced MR images. After network training, only the non-contrast-enhanced MR images are required as input to the proposed workflow. The MR images of 369 patients from the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were used to evaluate the proposed workflow for synthesizing contrast-enhanced MR images (200 patients for five-fold cross-validation and 169 patients for the holdout test). Quantitative evaluations were conducted by calculating the normalized mean absolute error (NMAE), structural similarity index measurement (SSIM), and Pearson correlation coefficient (PCC), with the original contrast-enhanced MR images considered as the ground truth.

Results: The proposed cascade deep learning workflow synthesized contrast-enhanced MR images that are not visually differentiable from the ground truth, with and without supervision of the tumor contours during network training.
Difference images and profiles of the synthetic contrast-enhanced MR images revealed that intensity differences could be observed in the tumor region if the contour information was not incorporated in network training. Among the holdout test patients, the means and standard deviations of the NMAE, SSIM, and PCC were 0.063±0.022, 0.991±0.007, and 0.995±0.006, respectively, for the whole brain, and 0.050±0.025, 0.993±0.008, and 0.999±0.003, respectively, for the tumor contour regions. Quantitative evaluations with five-fold cross-validation and the holdout test showed that the calculated metrics were significantly enhanced (p-values ≤ 0.002) by the tumor contour supervision in network training.

Conclusion: The proposed workflow was able to generate synthetic contrast-enhanced MR images that closely resemble the ground truth images from non-contrast-enhanced MR images when the network training included tumor contour supervision.
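Two of the evaluation metrics above, NMAE and PCC, can be sketched in a few lines of NumPy. Note that normalizing the mean absolute error by the ground-truth intensity range is one common convention; the abstract does not state the exact formula used, so that choice is an assumption here.

```python
import numpy as np

def nmae(gt, pred):
    # Mean absolute error normalized by the ground-truth intensity range.
    # (One common convention; the exact normalization in the study may differ.)
    return np.mean(np.abs(gt - pred)) / (gt.max() - gt.min())

def pcc(gt, pred):
    # Pearson correlation coefficient between the flattened images.
    return np.corrcoef(gt.ravel(), pred.ravel())[0, 1]

# Toy demonstration on synthetic "images".
rng = np.random.default_rng(0)
gt = rng.random((64, 64))
pred = gt + rng.normal(0.0, 0.01, gt.shape)  # near-perfect prediction
print(nmae(gt, pred), pcc(gt, pred))
```

SSIM, the third metric, is more involved (local luminance, contrast, and structure terms) and is typically computed with an existing implementation such as `skimage.metrics.structural_similarity` rather than by hand.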
Purpose: The common practice in acquiring magnetic resonance (MR) images is to obtain two-dimensional (2D) slices at coarse locations while keeping a high in-plane resolution, in order to ensure sufficient body coverage while shortening the MR scan time. The aim of this study is to propose a novel method to generate high-resolution (HR) MR images from low-resolution MR images along the longitudinal direction. To address the difficulty of collecting paired low- and high-resolution MR images in clinical settings, and to gain the advantage of parallel cycle-consistent generative adversarial networks (CycleGANs) in synthesizing realistic medical images, we developed a parallel-CycleGAN-based method using a self-supervised strategy.

Methods and materials: The proposed workflow consists of two CycleGANs, trained in parallel, that independently predict the HR MR images in the two planes orthogonal to the longitudinal MR scan direction. The final synthetic HR MR images are then generated by fusing the two predicted images. MR images, including T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2), and T2 fluid-attenuated inversion recovery (FLAIR), of the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were processed to evaluate the proposed workflow along the cranial-caudal (CC), lateral, and anterior-posterior directions. Institutionally collected MR images were also processed for evaluation of the proposed method. The performance of the proposed method was investigated via both qualitative and quantitative evaluations. Metrics of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), edge keeping index (EKI), structural similarity index measurement (SSIM), information fidelity criterion (IFC), and visual information fidelity in pixel domain (VIFP) were calculated.
Results: It is shown that the proposed method can generate HR MR images visually indistinguishable from the ground truth in the investigations on the BraTS2020 dataset. In addition, the intensity profiles, difference images, and SSIM maps confirm the feasibility of the proposed method for synthesizing HR MR images. Quantitative evaluations on the BraTS2020 dataset show that the calculated metrics of the synthetic HR MR images are all enhanced for the T1, T1CE, T2, and FLAIR images. The enhancements in the numerical metrics over the low-resolution and bicubic-interpolated MR images, as well as over those generated with a comparative deep learning method, are statistically significant. Qualitative evaluation of the synthetic HR MR images of
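The fusion step described above, combining the two HR volumes predicted in the orthogonal planes, is not specified in detail in the abstract. A minimal sketch, assuming simple voxel-wise averaging of the two co-registered predictions (one plausible choice, used here purely for illustration):

```python
import numpy as np

def fuse_orthogonal_predictions(vol_a, vol_b):
    # Voxel-wise average of two HR volumes predicted in orthogonal planes.
    # The abstract only says the two predictions are "fused"; averaging is
    # an assumed fusion rule for this illustration, not the paper's method.
    assert vol_a.shape == vol_b.shape, "volumes must be co-registered"
    return 0.5 * (vol_a + vol_b)

# Toy demonstration with two constant volumes.
a = np.zeros((4, 8, 8))
b = np.ones((4, 8, 8))
fused = fuse_orthogonal_predictions(a, b)
print(fused.mean())
```

Averaging tends to suppress plane-specific interpolation artifacts that appear in only one of the two predictions, which is one motivation for predicting in two orthogonal planes before fusing.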