Thin-slice computed tomography (CT) examination plays an important role in screening patients with suspected or confirmed coronavirus disease 2019 (COVID-19). Improving the resolution of COVID-19 CT images therefore has clear clinical value for diagnosis and assessment of disease severity. However, existing single-image super-resolution (SISR) methods mainly enlarge the receptive field of the convolution kernels by deepening and widening the network, and they treat features in the spatial and channel domains equally even though those features differ in importance, so a large amount of computation is wasted on unimportant features. We propose a progressive U-Net residual network (PURN) for super-resolution (SR) of COVID-19 CT images that improves the practicality of existing models, extracts features more effectively, and reduces the number of parameters. First, we design a dual U-Net module (DUM) that efficiently extracts features from low-resolution (LR) COVID-19 CT images. Second, the DUM applies three up-blocks followed by three down-blocks in order to learn the interdependence between high-resolution (HR) and LR images more effectively. Finally, local skip connections are introduced within the DUM, and a global long skip connection is introduced in the reconstruction layer to further enrich the flow of information toward the reconstructed HR image. Experimental results show that our algorithm improves the SR reconstruction of COVID-19 CT images, restores fine details more sharply, and greatly improves the practicability of the algorithm.
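The abstract describes the DUM as three up-blocks followed by three down-blocks, tied together with local skip connections and a global residual path. A minimal PyTorch sketch of how such a module could be wired is given below; the channel width, kernel sizes, scale factor, and the use of additive skips are illustrative assumptions, not the published PURN configuration.

```python
# Sketch of a DUM-like module: three up-blocks, then three down-blocks,
# with local skip connections between mirrored stages and a residual path
# back to the input features. Hyperparameters are assumptions.
import torch
import torch.nn as nn


class UpBlock(nn.Module):
    def __init__(self, channels, scale=2):
        super().__init__()
        # Strided transposed convolution upsamples the feature map by `scale`.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=2 * scale,
                               stride=scale, padding=scale // 2),
            nn.PReLU(),
        )

    def forward(self, x):
        return self.up(x)


class DownBlock(nn.Module):
    def __init__(self, channels, scale=2):
        super().__init__()
        # Strided convolution downsamples the feature map by `scale`.
        self.down = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=2 * scale,
                      stride=scale, padding=scale // 2),
            nn.PReLU(),
        )

    def forward(self, x):
        return self.down(x)


class DualUNetModule(nn.Module):
    """Three up-blocks, then three down-blocks; mirrored stages are joined
    by local (element-wise additive) skip connections."""

    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.ups = nn.ModuleList([UpBlock(channels, scale) for _ in range(3)])
        self.downs = nn.ModuleList([DownBlock(channels, scale) for _ in range(3)])

    def forward(self, x):
        skips, out = [], x
        for up in self.ups:
            out = up(out)
            skips.append(out)
        for down, skip in zip(self.downs, reversed(skips)):
            out = down(out + skip)   # local skip connection
        return out + x               # residual path back to the LR features


if __name__ == "__main__":
    lr_feat = torch.randn(1, 64, 16, 16)
    print(DualUNetModule()(lr_feat).shape)  # torch.Size([1, 64, 16, 16])
```

Alternating up- and down-projections of this kind let the network compare its upsampled estimate against the LR features repeatedly, which is one way to model the HR/LR interdependence mentioned in the abstract.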
Deep neural networks have achieved good results in medical image super-resolution. However, owing to the limitations of medical equipment and the complexity of human anatomy, it is difficult to reconstruct clear cardiac magnetic resonance (CMR) super-resolution images. To reconstruct clearer CMR images, we propose a CMR image super-resolution (SR) algorithm based on a multichannel residual attention network (MCRN). It uses residual learning to ease training and to fully exploit the feature information in the image, and a back-projection learning mechanism to learn the interdependence between high-resolution and low-resolution images. Furthermore, the MCRN model introduces an attention mechanism that dynamically allocates different amounts of attention to each feature map, discovering more high-frequency information and learning the dependencies between the channels of the feature maps. Extensive benchmark evaluation shows that, compared with state-of-the-art image SR methods, our MCRN algorithm not only improves the objective metrics significantly but also provides richer texture information in the reconstructed CMR images, and it outperforms bicubic interpolation on the information entropy and average gradient of the reconstructed images.
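The MCRN abstract combines residual learning with a channel-wise attention mechanism that reweights each feature map. A minimal sketch of a squeeze-and-excitation-style channel attention residual block, one common way to realize such a mechanism, is shown below; the layer sizes and reduction ratio are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of a residual block with channel attention: each channel of the
# feature map is rescaled by a learned weight, so the network can spend more
# capacity on channels carrying high-frequency detail. Sizes are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Squeeze: global average pooling gives one descriptor per channel.
        # Excite: two 1x1 convs map it to a weight in (0, 1) per channel.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attn(x)  # rescale each channel by its learned weight


class ResidualChannelAttentionBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        # Residual learning: the block refines the features and adds them
        # back, which eases optimization in deep SR networks.
        return x + self.body(x)


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)
    print(ResidualChannelAttentionBlock()(feat).shape)  # torch.Size([1, 64, 32, 32])
```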