Spatial resolution is a key factor in quantitatively evaluating the quality of magnetic resonance imaging (MRI). Super-resolution (SR) approaches can improve spatial resolution by reconstructing high-resolution (HR) images from low-resolution (LR) ones to meet clinical and scientific requirements. To increase the quality of brain MRI, we study a robust residual-learning SR network (RRLSRN) that generates a sharp HR brain image from an LR input. Because the Charbonnier loss handles outliers well and the Gradient Difference Loss (GDL) sharpens an image, we combine the Charbonnier loss and GDL to improve the robustness of the model and enhance the texture information of the SR results. Two adult brain MRI datasets, Kirby 21 and NAMIC, were used to train the model and verify its effectiveness. To further verify the generalizability and robustness of the proposed model, we collected eight clinical 2D fetal brain MRI datasets for evaluation. The experimental results show that the proposed deep residual-learning network achieved superior performance and higher efficiency than the compared methods.
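A minimal PyTorch sketch of the combined objective described above is given below; the epsilon constant and the GDL weighting factor `lambda_gdl` are illustrative assumptions, not values reported in the paper.

```python
# Sketch of the combined loss: Charbonnier term for robustness to outliers plus a
# Gradient Difference Loss (GDL) term to keep reconstructed edges sharp.
import torch


def charbonnier_loss(pred, target, eps=1e-3):
    # Smooth L1-like penalty: sqrt(diff^2 + eps^2) is differentiable everywhere
    # and grows linearly for large residuals, limiting the influence of outliers.
    # eps is an assumed value.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))


def gradient_difference_loss(pred, target):
    # Penalize differences between the absolute spatial gradients of the
    # prediction and the target along both image axes.
    dy_pred, dx_pred = pred.diff(dim=-2).abs(), pred.diff(dim=-1).abs()
    dy_true, dx_true = target.diff(dim=-2).abs(), target.diff(dim=-1).abs()
    return torch.mean((dy_pred - dy_true).abs()) + torch.mean((dx_pred - dx_true).abs())


def combined_loss(pred, target, lambda_gdl=0.1):
    # Total objective: Charbonnier + weighted GDL (the weight is an assumption).
    return charbonnier_loss(pred, target) + lambda_gdl * gradient_difference_loss(pred, target)
```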
Alzheimer's disease (AD) is the most common cause of dementia and threatens the health of millions of people. Early-stage diagnosis of AD is critical for improving clinical outcomes, and longitudinal magnetic resonance imaging (MRI) data can be used to monitor the progress of each patient. However, missing data is a common problem in longitudinal AD studies, mainly due to subject dropout and failed scans. This hinders the acquisition of longitudinal sequences consisting of multi-time-point magnetic resonance (MR) images at relatively uniform intervals. In this paper, we present a generative adversarial convolutional network to predict missing structural MRI data. In particular, we treat multiple MRI scans acquired at different times as a temporal sequence and model the spatio-temporal relationship between the scans in the proposed network. We adopt residual bottlenecks in the generator to reduce the number of parameters and deepen the network. To make full use of the longitudinal information, our discriminator not only distinguishes real MR images from generated ones, but also distinguishes real sequences from fake sequences in which the MR images for all earlier time points come from the dataset and only the last MR image comes from the generator. Experimental results show that our method more accurately predicts longitudinal structural MRI data for brains afflicted with AD.
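As a rough illustration of the residual bottleneck blocks mentioned for the generator, the following PyTorch sketch shows one such block; the channel widths, reduction ratio, and normalization choices are assumptions rather than details taken from the paper.

```python
# Sketch of a residual bottleneck block: 1x1 squeeze -> 3x3 -> 1x1 expand with an
# identity skip, which keeps parameter counts low while allowing deeper stacks.
import torch
import torch.nn as nn


class ResidualBottleneck(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),        # 1x1: squeeze channels
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),  # 3x3: spatial mixing at low width
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),        # 1x1: expand back
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity skip connection keeps gradients flowing through deep stacks.
        return self.act(x + self.body(x))
```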
Feature extraction is a key step in hyperspectral image change detection. However, targets of widely varying sizes, such as narrow paths, wide rivers, and large tracts of cultivated land, can appear in a satellite remote sensing image at the same time, which increases the difficulty of feature extraction. In addition, the number of changed pixels is much smaller than the number of unchanged pixels, which leads to class imbalance and affects the accuracy of change detection. To address these issues, building on the U-Net model, we propose an adaptive convolution kernel structure to replace the original convolution operations and design a weighted loss function for the training stage. The adaptive convolution kernel combines two different kernel sizes and automatically generates the corresponding weight feature map during training. Each output pixel obtains its convolution kernel combination according to this weight. This structure, which automatically selects the kernel size, can effectively adapt to targets of different sizes and extract multi-scale spatial features. The modified cross-entropy loss function addresses class imbalance by increasing the weight of changed pixels. Results on four datasets indicate that the proposed method performs better than most existing methods.
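The following PyTorch sketch illustrates one plausible form of the adaptive dual-kernel structure and the weighted loss described above; the 3x3/5x5 kernel sizes, the sigmoid gating branch, and the class weights are assumptions for illustration only.

```python
# Sketch of an adaptive dual-kernel convolution: two parallel convolutions with
# different kernel sizes, fused per pixel by a learned weight map.
import torch
import torch.nn as nn


class AdaptiveKernelConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.small = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # fine targets (e.g. narrow paths)
        self.large = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)  # broad targets (e.g. rivers, fields)
        # Gating branch: one weight per output pixel, learned during training.
        self.gate = nn.Sequential(nn.Conv2d(in_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        w = self.gate(x)  # (N, 1, H, W), values in [0, 1]
        return w * self.small(x) + (1 - w) * self.large(x)


# Class imbalance between changed and unchanged pixels can be handled with a
# weighted cross-entropy; the weight values below are placeholders.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 10.0]))
```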