One category of superresolution algorithms widely used in practical applications is dictionary-based superresolution, which constructs a single high-resolution (HR), high-clarity image from multiple low-resolution (LR) images. Although general dictionary-based superresolution algorithms learn redundant dictionaries from numerous HR-LR image pairs, HR image distortion is unavoidable. To address this problem, this paper proposes a multiframe superresolution reconstruction method based on self-learning. First, multiple images of the same scene are selected as both input and training images, and larger-scale images, which are also added to the training set, are constructed from the learned dictionary. Then, progressively larger-scale images are constructed by repeating the first step until an initial HR set whose scale closely approximates that of the target HR image is obtained. Lastly, the initial HR images are fused into one target HR image using the nonlocal means (NLM) idea, while iterative back-projection (IBP) is adopted to enforce the global constraint. The simulation results demonstrate that the proposed algorithm produces more accurate reconstructions than other general superresolution algorithms, and in real-scene experiments it runs well and creates clearer HR images from input images captured by cameras.
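The final IBP step enforces consistency between the reconstructed HR image and the LR observation. A minimal sketch of that idea, assuming an average-pooling downsampler and a nearest-neighbour back-projection kernel as stand-ins for the paper's actual operators:

```python
import numpy as np

def downsample(hr, s):
    # Average-pool by factor s (a simple stand-in for the blur + decimation operator)
    h, w = hr.shape
    return hr.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(lr, s):
    # Nearest-neighbour upsampling (a stand-in for the back-projection kernel)
    return np.repeat(np.repeat(lr, s, axis=0), s, axis=1)

def ibp_refine(hr_init, lr_obs, s, n_iter=20, step=1.0):
    """Iteratively back-project the simulated-LR error so the HR estimate
    stays consistent with the observed LR image (the global constraint)."""
    hr = hr_init.copy()
    for _ in range(n_iter):
        err = lr_obs - downsample(hr, s)   # residual in LR space
        hr += step * upsample(err, s)      # project residual back to HR space
    return hr
```

With these particular operators the LR residual vanishes after the first iteration; with realistic blur kernels the correction is applied repeatedly until the simulated LR image matches the observation.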
In this paper, a super-resolution enhancement model based on a skip residual dense net (SRDN) is proposed. We design a model with two-channel skip residual dense nets to extract deeper feature information. The two channels have the same network structure and are connected by skip connections. The model first divides the input image into two components by guided filtering. The features of the two components are learned by one convolution layer and a three-layer double-channel SRDN network, respectively. The model then uses a concatenation operation to combine the two channels' features. Finally, the super-resolution image is obtained by an upsampling network and one convolution operation. Unlike the classical loss function, we define a joint loss function for training, which consists of content loss, perceptual loss, and color discrimination loss. The experimental results show that the proposed algorithm achieves better superresolution visual results and objective evaluation indicators.
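The guided-filtering step splits the input into a smooth base component and a detail component, which the two channels then process separately. A minimal sketch of such a decomposition, using a box filter as a stand-in for the paper's guided filter:

```python
import numpy as np

def box_blur(img, r):
    # Mean filter over a (2r+1) x (2r+1) window, with edge padding
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img, r=2):
    """Split an image into a smooth base layer and a residual detail layer.
    A box filter stands in here for the guided filter; in the proposed model
    the two components would feed the two SRDN channels separately."""
    base = box_blur(img, r)
    detail = img - base
    return base, detail
```

Because the detail layer is defined as the residual, the two components sum exactly back to the input, so no information is lost by the split.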