The continuous development and extensive use of computed tomography (CT) in medical practice has raised public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists' judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve the diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim to minimize the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of optimal transport theory and promises to improve the performance of the GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses on statistically shifting the noise distribution of the data from strong to weak. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also preserving critical structural information. Promising results have been obtained in our experiments with clinical CT images.
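To make the combination of the Wasserstein adversarial term and the perceptual term concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the denoiser and critic networks and the weighting factor lambda_p are hypothetical placeholders, the perceptual feature space is assumed to be a frozen VGG-19 (a common choice), and input normalization and the Lipschitz constraint on the critic are omitted for brevity.

```python
# Minimal sketch (assumptions, not the authors' code): Wasserstein adversarial
# loss combined with a VGG-feature perceptual loss for CT denoising.
import torch
import torch.nn as nn
from torchvision.models import vgg19

# Frozen VGG-19 feature extractor used as the perceptual feature space.
vgg_features = vgg19(pretrained=True).features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def perceptual_loss(denoised, target):
    # Compare feature maps rather than pixels; single-channel CT images are
    # replicated to 3 channels to match the VGG input format.
    f_d = vgg_features(denoised.repeat(1, 3, 1, 1))
    f_t = vgg_features(target.repeat(1, 3, 1, 1))
    return nn.functional.mse_loss(f_d, f_t)

def generator_loss(denoiser, critic, low_dose, full_dose, lambda_p=0.1):
    denoised = denoiser(low_dose)
    # Wasserstein adversarial term: the generator tries to raise the critic score.
    adv = -critic(denoised).mean()
    return adv + lambda_p * perceptual_loss(denoised, full_dose)

def critic_loss(critic, denoised, full_dose):
    # The critic approximates the Wasserstein distance between the denoised and
    # full-dose image distributions (a gradient penalty would be added in practice).
    return critic(denoised.detach()).mean() - critic(full_dose).mean()
```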
Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be obtained directly by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Owing to this transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves better denoising performance than training from scratch. Comparing the CPCE network with recently published work on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model performs better, suppressing image noise while preserving subtle structures.
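The 2-D-to-3-D extension can be illustrated with a small PyTorch sketch. This is an assumption-laden illustration rather than the published initialization scheme: it seeds the central depth slice of each 3-D kernel with the trained 2-D weights and zero-fills the remaining slices, so that before fine-tuning the 3-D layer behaves like the 2-D layer applied to the central slice.

```python
# Minimal sketch (assumption, not the published implementation): initializing a
# 3-D convolution from a trained 2-D convolution so that fine-tuning starts from
# the 2-D denoiser rather than from scratch.
import torch
import torch.nn as nn

def conv2d_to_conv3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       padding=(0, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        conv3d.weight.zero_()
        # Central depth slice carries the trained 2-D kernel; other slices are zero.
        conv3d.weight[:, :, depth // 2] = conv2d.weight
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```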
Accurate estimation of the satellite-based global terrestrial latent heat flux (LE) at high spatial and temporal scales remains a major challenge. In this study, we introduce a Bayesian model averaging (BMA) method to improve satellite-based global terrestrial LE estimation by merging five process-based algorithms: the Moderate Resolution Imaging Spectroradiometer (MODIS) LE product algorithm, the revised remote-sensing-based Penman-Monteith LE algorithm, the Priestley-Taylor-based LE algorithm, the modified satellite-based Priestley-Taylor LE algorithm, and the semi-empirical Penman LE algorithm. We validated the BMA method using data for 2000-2009 and by comparison with a simple model averaging (SA) method and the five process-based algorithms. Validation data were collected at 240 globally distributed eddy covariance tower sites provided by FLUXNET projects. The validation results demonstrate that the five process-based algorithms have variable uncertainty and that the BMA method enhances the daily LE estimates, with smaller root-mean-square errors (RMSEs) than both the SA method and the individual algorithms, whether driven by tower-specific meteorology or by Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorological data provided by the NASA Global Modeling and Assimilation Office (GMAO). The average RMSE for the BMA method driven by daily tower-specific meteorology decreased by more than 5 W/m² for crop and grass sites, and by more than 6 W/m² for forest, shrub, and savanna sites. The average coefficients of determination (R²) increased by approximately 0.05 for most sites. To test the BMA method for regional mapping, we applied it to MODIS data and GMAO-MERRA meteorology to map annual global terrestrial LE, averaged over 2001-2004, at a spatial resolution of 0.05°. The BMA method provides a basis for generating a long-term global terrestrial LE product for characterizing the global energy, hydrological, and carbon cycles.
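To make the merging step concrete, the following is a minimal Python sketch of a standard Gaussian-mixture formulation of BMA with weights estimated by expectation-maximization; it is an illustration under stated assumptions, not the study's code. The variable names `predictions` (an N-by-K matrix of the K algorithms' LE estimates) and `observed` (the corresponding tower LE measurements) are hypothetical, and a single common error standard deviation is used for simplicity.

```python
# Minimal sketch (illustrative, not the study's implementation): Bayesian model
# averaging of K LE algorithms, with weights fitted by EM against tower observations.
import numpy as np
from scipy.stats import norm

def bma_weights(predictions, observed, n_iter=100):
    n, k = predictions.shape
    w = np.full(k, 1.0 / k)                          # equal prior weights
    sigma = np.std(observed - predictions.mean(axis=1))
    for _ in range(n_iter):
        # E-step: responsibility of each algorithm for each observation
        like = w * norm.pdf(observed[:, None], loc=predictions, scale=sigma)
        resp = like / like.sum(axis=1, keepdims=True)
        # M-step: update weights and the common error standard deviation
        w = resp.mean(axis=0)
        sigma = np.sqrt(np.sum(resp * (observed[:, None] - predictions) ** 2) / n)
    return w, sigma

def bma_merge(predictions, w):
    # Merged LE estimate is the weighted average of the K algorithms' predictions.
    return predictions @ w
```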