Video super-resolution is a challenging task that has attracted great attention from both research and industry communities. In this paper, we propose a novel end-to-end architecture, called the Residual Invertible Spatio-Temporal Network (RISTN), for video super-resolution. RISTN sufficiently exploits the spatial information from low resolution to high resolution and effectively models the temporal consistency across consecutive video frames. Compared with existing recurrent convolutional network based approaches, RISTN is much deeper yet more efficient. It consists of three major components: In the spatial component, a lightweight residual invertible block is designed to reduce information loss during feature transformation and provide robust feature representations. In the temporal component, a novel recurrent convolutional model with residual dense connections is proposed to construct a deeper network and avoid feature degradation. In the reconstruction component, a new fusion method based on a sparse strategy is proposed to integrate the spatial and temporal features. Experiments on public benchmark datasets demonstrate that RISTN outperforms state-of-the-art methods.
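The abstract does not specify how the lightweight residual invertible block is constructed; the following PyTorch sketch illustrates one plausible realization using an additive coupling update, whose exact inverse shows why such a block avoids information loss during feature transformation. The module name, channel split, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a residual invertible block in the spirit of RISTN.
# Assumes an even channel count; the additive coupling is exactly invertible.
import torch
import torch.nn as nn

class ResidualInvertibleBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        # Lightweight residual transform applied to one half of the channels.
        self.f = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1),
        )

    def forward(self, x):
        # Split channels, update one half from the other, then concatenate.
        x1, x2 = torch.chunk(x, 2, dim=1)
        y2 = x2 + self.f(x1)          # residual-style, invertible update
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        # Exact inverse of the forward pass: no feature information is lost.
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.f(y1)
        return torch.cat([y1, x2], dim=1)
```

Because the forward mapping can be inverted exactly, stacking many such blocks keeps the spatial component deep while preserving the feature content passed to the temporal and reconstruction components.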
It is well known that high-frequency information (e.g., textures and edges) is significant for single image super-resolution (SISR). However, existing deep Convolutional Neural Network (CNN) based methods directly model the mapping function from low resolution (LR) to high resolution (HR) and treat high-frequency and low-frequency information equally during feature extraction. Consequently, high-frequency learning does not receive sufficient attention, resulting in inaccurate representation of some local details. In this study, we aim to model the latent relations between frequencies and handle high-frequency and low-frequency information differently. Specifically, we propose a novel Frequency Separation Network (FSN) for image super-resolution (SR). In FSN, a new Octave Convolution (OC) is adopted, which uses four operations to perform information update and frequency communication between high-frequency and low-frequency features. In addition, global and hierarchical feature fusion are employed to learn elaborate and comprehensive feature representations, further improving the quality of the final image reconstruction. Extensive experiments conducted on benchmark datasets demonstrate the state-of-the-art performance of our method.
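To make the four operations concrete, the PyTorch sketch below shows a generic Octave Convolution layer with the four information paths (high-to-high, high-to-low, low-to-high, low-to-low); the channel split ratio, kernel sizes, and resampling choices are assumptions for illustration and may differ from the OC variant used in FSN.

```python
# Minimal sketch of an Octave Convolution layer with four frequency paths.
# alpha controls the fraction of channels allocated to low-frequency features,
# which are kept at half spatial resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5):
        super().__init__()
        lo_in, lo_out = int(alpha * in_ch), int(alpha * out_ch)
        hi_in, hi_out = in_ch - lo_in, out_ch - lo_out
        # One convolution per information path.
        self.h2h = nn.Conv2d(hi_in, hi_out, 3, padding=1)
        self.h2l = nn.Conv2d(hi_in, lo_out, 3, padding=1)
        self.l2h = nn.Conv2d(lo_in, hi_out, 3, padding=1)
        self.l2l = nn.Conv2d(lo_in, lo_out, 3, padding=1)

    def forward(self, x_h, x_l):
        # x_h: high-frequency features at full resolution
        # x_l: low-frequency features at half resolution
        y_h = self.h2h(x_h) + F.interpolate(self.l2h(x_l),
                                            scale_factor=2, mode='nearest')
        y_l = self.l2l(x_l) + self.h2l(F.avg_pool2d(x_h, 2))
        return y_h, y_l
```

Each output branch mixes an intra-frequency update with a cross-frequency communication term, which is what lets the network treat the two frequency bands differently while still exchanging information between them.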
Recently, image super-resolution methods based on Convolutional Neural Networks (CNNs) and Generative Adversarial Nets (GANs) have shown promising performance. However, these methods tend to generate blurry and over-smoothed super-resolved (SR) images, due to incomplete loss functions and insufficiently powerful network architectures. In this paper, a novel generative adversarial image super-resolution method with deep dense skip connections (GSR-DDNet) is proposed to solve the above-mentioned problems. It takes advantage of a GAN's ability to model data distributions, so that GSR-DDNet can select informative feature representations and model the mapping between low-quality and high-quality images in an adversarial way. The pipeline of the proposed method consists of three main components: 1) a generator, a novel dense skip connection network with a deep structure for learning a robust mapping function, is proposed to generate SR images from low-resolution images; 2) a feature extraction network based on VGG-19 is adopted to capture high-frequency feature maps for the content loss; and 3) a discriminator with the Wasserstein distance is adopted to identify the overall style of SR and ground-truth images. Experiments conducted on four publicly available datasets demonstrate its superiority over state-of-the-art methods.
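The loss composition described above (VGG-19 content loss plus a Wasserstein adversarial term) can be sketched as follows in PyTorch. The chosen VGG-19 feature layer, the L1 criterion, the weighting lambda_adv, and the function names are assumptions for illustration, not values or code from the paper.

```python
# Hypothetical sketch of a VGG-19 content loss combined with Wasserstein
# adversarial terms, in the spirit of the GSR-DDNet training objective.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class VGGContentLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen VGG-19 feature extractor (up to relu5_4) for comparing
        # high-frequency feature maps of SR and ground-truth images.
        self.features = vgg19(pretrained=True).features[:36].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.L1Loss()

    def forward(self, sr, hr):
        return self.criterion(self.features(sr), self.features(hr))

def generator_loss(content_loss, critic, sr, hr, lambda_adv=1e-3):
    # Wasserstein formulation: the generator maximizes the critic score on
    # SR images, i.e. minimizes its negation, plus the content loss.
    return content_loss(sr, hr) - lambda_adv * critic(sr).mean()

def critic_loss(critic, sr, hr):
    # The critic estimates the Wasserstein distance between the SR and
    # ground-truth distributions (a gradient penalty or weight clipping
    # would typically be added in practice).
    return critic(sr.detach()).mean() - critic(hr).mean()
```

The content term anchors the generator to the high-frequency structure of the ground truth, while the Wasserstein terms push the overall style of the SR images toward the real-image distribution.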