The video super-resolution (VSR) task aims to restore a high-resolution video frame from its corresponding low-resolution frame and multiple neighboring frames. At present, many deep learning-based VSR methods rely on optical flow to perform frame alignment, so the quality of the final restoration depends heavily on the accuracy of the estimated flow; optical flow estimation, however, can never be completely accurate and inevitably introduces errors. In this paper, we propose a novel deformable non-local network (DNLN) that is non-flow-based. Specifically, we apply improved deformable convolution in our alignment module to achieve adaptive frame alignment at the feature level. Furthermore, we utilize a non-local module to capture the global correlation between the reference frame and the aligned neighboring frame, and simultaneously enhance desired fine details in the aligned frame. To reconstruct the final high-quality HR video frames, we use residual-in-residual dense blocks to take full advantage of hierarchical features. Experimental results on several datasets demonstrate that the proposed DNLN achieves state-of-the-art performance on the video super-resolution task.
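As an illustration of the alignment idea, the following PyTorch snippet is a minimal sketch (not the authors' implementation) that aligns a neighboring frame's features to the reference frame using torchvision's DeformConv2d; the layer sizes and the offset-prediction design are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableAlign(nn.Module):
    """Aligns neighboring-frame features to the reference at the feature level."""
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        # Sampling offsets are predicted from the concatenated feature pair
        # (2 offsets per kernel position: one x and one y displacement).
        self.offset_conv = nn.Conv2d(channels * 2, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=kernel_size // 2)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                        padding=kernel_size // 2)

    def forward(self, ref_feat, nbr_feat):
        # The learned offsets adapt the sampling grid per position, so the
        # convolution gathers neighbor features that correspond to the reference.
        offsets = self.offset_conv(torch.cat([ref_feat, nbr_feat], dim=1))
        return self.deform_conv(nbr_feat, offsets)

# Usage: align 64-channel feature maps of a neighboring frame to the reference.
aligned = DeformableAlign()(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```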
Single image super-resolution is known to be an ill-posed problem that has been studied for decades. With the development of deep convolutional neural networks, CNN-based single image super-resolution methods have greatly improved the quality of the generated high-resolution images. However, it remains difficult for image super-resolution to make full use of the relationships between pixels in low-resolution images. To address this issue, we propose a novel multi-scale residual hierarchical dense network, which exploits the dependencies among multi-level and multi-scale features. In particular, we apply atrous spatial pyramid pooling, which concatenates multiple atrous convolutions with different dilation rates, and design a residual hierarchical dense structure for single image super-resolution. The atrous spatial pyramid pooling module learns the relationships among features at multiple scales, while the residual hierarchical dense structure, which consists of several hierarchical dense blocks with skip connections, adaptively detects key information from multi-level features. Meanwhile, the hierarchical dense blocks densely connect features from different groups, which adequately extracts local multi-level features. Extensive experiments on benchmark datasets illustrate the superiority of our proposed method over state-of-the-art methods. The super-resolution results of our method on benchmark datasets can be downloaded from https://github.com/Rainyfish/MS-RHDN, and the source code will be released upon acceptance of the paper.
INDEX TERMS Convolutional neural networks, deep learning, multi-scale residual hierarchical dense network, image super-resolution.
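To make the atrous spatial pyramid pooling concrete, here is a minimal sketch; the dilation rates (1, 2, 4) and channel widths are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous convolutions with different dilation rates, concatenated."""
    def __init__(self, channels=64, rates=(1, 2, 4)):
        super().__init__()
        # Matching the padding to the dilation rate keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates)
        # A 1x1 convolution fuses the concatenated multi-scale responses.
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

out = ASPP()(torch.randn(1, 64, 48, 48))  # multi-scale context, same resolution
```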
Image super-resolution (SR) has extensive applications in surveillance systems, satellite imaging, medical imaging, and ultra-high definition display devices. The state-of-the-art methods for SR still incur considerable running time. In this paper, we propose a novel approach based on Hadamard patterns and a tree search structure in order to reduce the running time significantly. In this approach, LR (low-resolution)-HR (high-resolution) training patch pairs are classified into different classes based on the Hadamard patterns generated from the LR training patches. The mapping relationship between the LR space and the HR space for each class is then learned and used for SR. Experimental results show that the proposed method achieves accuracy comparable to that of state-of-the-art methods at a much faster running speed. The dataset, pretrained models and source code can be accessed at this URL † .
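One plausible reading of the Hadamard-pattern classification is sketched below, under assumed details (4x4 patches, a class index formed from the sign bits of low-order transform coefficients); the paper's exact construction may differ.

```python
import numpy as np
from scipy.linalg import hadamard

H4 = hadamard(4)  # 4x4 Hadamard matrix with +1/-1 entries

def hadamard_class(patch):
    """Map a 4x4 LR patch to a class index via Hadamard-coefficient signs."""
    coeffs = H4 @ patch @ H4.T          # 2-D Hadamard transform of the patch
    ac = coeffs.flatten()[1:9]          # skip DC; keep 8 low-order coefficients
    bits = (ac >= 0).astype(int)        # binarize by sign to form the pattern
    return int("".join(map(str, bits)), 2)  # one of 256 classes

cls = hadamard_class(np.random.rand(4, 4))
# Training: patch pairs in each class share one learned LR->HR mapping.
# Testing: the class index (e.g., located by tree search) selects that mapping.
```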
With the help of deep convolutional neural networks, a vast majority of single image super-resolution (SISR) methods have been developed and have achieved promising performance. However, these methods suffer from over-smoothness in textured regions because a single-resolution network reconstructs both the low-frequency and high-frequency information simultaneously. To overcome this problem, we propose a Multi-resolution space-Attended Residual Dense Network (MARDN) to separate low-frequency and high-frequency information for reconstructing high-quality super-resolved images. Specifically, we start from a low-resolution sub-network and add low-to-high-resolution sub-networks step by step over several stages. These sub-networks, with different depths and resolutions, produce feature maps of different frequencies in parallel. For instance, the high-resolution sub-network with fewer stages extracts local high-frequency textural information, while the low-resolution one with more stages generates global low-frequency information. Furthermore, a fusion block with channel-wise sub-network attention is proposed to adaptively fuse the feature maps from different sub-networks instead of applying concatenation and a 1 × 1 convolution. A series of ablation investigations and model analyses validates the effectiveness and efficiency of our MARDN. Extensive experiments on benchmark datasets demonstrate the superiority of the proposed MARDN over state-of-the-art methods. Our super-resolution results and the source code can be downloaded from https://github.com/Periter/MARDN.
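A minimal sketch of such channel-wise sub-network attention fusion follows; the pooling/FC design and sizes are assumptions, not the authors' exact block.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuses per-sub-network feature maps with channel-wise attention weights."""
    def __init__(self, channels=64):
        super().__init__()
        # Maps each branch's channel descriptor to per-channel attention logits.
        self.fc = nn.Linear(channels, channels)

    def forward(self, feats):  # feats: list of (N, C, H, W), one per sub-network
        stacked = torch.stack(feats, dim=1)            # (N, B, C, H, W)
        desc = stacked.mean(dim=(3, 4))                # global average pooling
        weights = torch.softmax(self.fc(desc), dim=1)  # branches compete per channel
        return (stacked * weights[..., None, None]).sum(dim=1)  # weighted sum

# Usage: fuse feature maps from three sub-networks, assumed already resampled
# to a common spatial resolution before fusion.
fused = AttentionFusion()([torch.randn(1, 64, 32, 32) for _ in range(3)])
```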