This paper presents a novel example-based single-image super-resolution procedure that upscales a given low-resolution (LR) input image to high resolution (HR) without relying on an external dictionary of image examples. Instead, the dictionary is built from the LR input image itself, by generating a double pyramid of recursively scaled, and subsequently interpolated, images from which self-examples are extracted. The upscaling procedure is multipass, i.e., the output image is constructed through gradual magnification increases, and consists of learning linear mapping functions on this double pyramid, one for each patch of the current image to upscale. More precisely, for each LR patch, similar self-examples are found and, based on them, a linear function is learned that maps the patch directly to its HR version. Iterative back-projection is also employed to ensure consistency at each pass of the procedure. Extensive experiments and comparisons with other state-of-the-art methods, based on both external and internal dictionaries, show that our algorithm produces visually pleasing upscalings, with sharp edges and well-reconstructed details. Moreover, under objective metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), our method gives the best performance.
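The core per-patch step described above can be sketched as follows. This is a minimal illustration, not the authors' full pipeline (no double pyramid, multipass loop, or back-projection); all names, patch sizes, and the neighbor count are illustrative assumptions. For one flattened LR patch, the k most similar LR self-examples are found, and a linear map to the HR space is fitted by least squares over those pairs.

```python
# Hedged sketch of per-patch linear mapping from LR/HR self-example pairs.
# Assumed setup: patches are pre-flattened vectors; sizes are arbitrary here.
import numpy as np

def learn_patch_map(lr_patch, lr_examples, hr_examples, k=20):
    """Map one flattened LR patch to HR using its k nearest self-examples."""
    d = np.linalg.norm(lr_examples - lr_patch, axis=1)
    idx = np.argsort(d)[:k]                      # k most similar LR examples
    L = lr_examples[idx]                         # (k, n_lr) LR example matrix
    H = hr_examples[idx]                         # (k, n_hr) matching HR patches
    # Fit M minimizing ||L @ M - H||_F^2, then apply M to the input patch.
    M, *_ = np.linalg.lstsq(L, H, rcond=None)
    return lr_patch @ M

# Toy check: example pairs related by an exact linear map (3x3 -> 6x6 patches).
rng = np.random.default_rng(0)
true_map = rng.standard_normal((9, 36))          # hypothetical LR->HR relation
lr_ex = rng.standard_normal((200, 9))
hr_ex = lr_ex @ true_map                         # consistent self-example pairs
query = rng.standard_normal(9)
hr_pred = learn_patch_map(query, lr_ex, hr_ex, k=20)
```

With k at least as large as the LR patch dimension, the least-squares fit is overdetermined, which is what makes the learned map stable in this sketch.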
This paper describes a novel method for single-image super-resolution (SR) based on a neighbor embedding technique that uses semi-nonnegative matrix factorization (SNMF). Each low-resolution (LR) input patch is approximated by a linear combination of its nearest neighbors taken from a dictionary. This dictionary stores low-resolution and corresponding high-resolution (HR) patches taken from natural images and is thus used to infer the HR details of the super-resolved image. The entire neighbor embedding procedure is carried out in a feature space. Features, which are either the gradient values of the pixels or the mean-subtracted luminance values, are extracted from the LR input patches and from the LR and HR patches stored in the dictionary. The algorithm thus searches for the K nearest neighbors of the feature vector of the LR input patch and then computes the weights for approximating that input feature vector. The use of SNMF for computing the weights of the linear approximation is shown to behave more stably than locally linear embedding (LLE) and to lead to significantly higher PSNR values for the super-resolved images.
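The weight-computation step can be illustrated with a simple stand-in. The paper uses SNMF; the sketch below substitutes a projected-gradient nonnegative least-squares solver, which shares the key property that the combination weights are constrained to be nonnegative. All sizes, names, and the solver itself are assumptions for illustration only.

```python
# Hedged sketch of neighbor embedding with nonnegative weights (a projected-
# gradient NNLS stand-in for the SNMF weight computation described in the text).
import numpy as np

def nonneg_weights(neighbors, x, iters=2000):
    """Return w >= 0 minimizing ||neighbors.T @ w - x||^2 (projected gradient)."""
    G = neighbors @ neighbors.T                   # Gram matrix of the neighbors
    step = 1.0 / (2.0 * np.linalg.eigvalsh(G)[-1] + 1e-12)
    w = np.full(len(neighbors), 1.0 / len(neighbors))
    for _ in range(iters):
        w = np.maximum(w - step * 2.0 * (G @ w - neighbors @ x), 0.0)
    return w

# Toy check: an input feature vector that truly is a nonnegative combination
# of its K=4 neighbors should be recovered by the solver.
rng = np.random.default_rng(1)
lr_neighbors = rng.standard_normal((4, 8))        # K=4 LR feature neighbors
w_true = np.array([0.5, 0.2, 0.0, 0.3])           # nonnegative combination
x = lr_neighbors.T @ w_true                       # LR input feature vector
w = nonneg_weights(lr_neighbors, x)
# In neighbor-embedding SR, the same weights would then combine the HR patches
# corresponding to these LR neighbors.
```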
The problem of constellation shaping for broadcast transmission over degraded channels remains a challenge, especially when a single source communicates simultaneously with two receivers using a finite-dimension constellation. This paper focuses on a practical situation where a unicast service for each user is transmitted over a broadcast channel. We investigate the optimization of an achievable-rate region by using non-uniform constellations, obtained by superimposing high-rate information on low-rate information, and by using a nonequiprobable distribution of the transmitted symbols. The achievable rate region is derived for a two-user additive white Gaussian noise (AWGN) broadcast channel and for finite-input pulse amplitude modulation (PAM) constellations. A noticeable shaping gain of up to 3.5 dB in signal-to-noise ratio (SNR) is shown, compared with an equiprobable distribution of transmitted symbols for a 4-PAM constellation, when the achievable rates are maximized over the probability distribution of the channel input signals and the constellation shape.
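A building block of such an optimization is evaluating the achievable rate of a finite PAM input with a nonequiprobable symbol distribution. The sketch below, an illustration rather than the paper's method, numerically computes the mutual information I(X; Y) of a 4-PAM input over a single-user AWGN channel; the grid size, noise level, and example shaped distribution are all assumptions.

```python
# Hedged sketch: mutual information of a discrete PAM input over AWGN, for an
# arbitrary (possibly nonequiprobable) input distribution, by grid integration.
import numpy as np

def pam_mutual_information(levels, probs, sigma, grid=4000, span=8.0):
    """I(X;Y) in bits for Y = X + N, N ~ Gaussian(0, sigma^2), X discrete."""
    y = np.linspace(levels.min() - span * sigma,
                    levels.max() + span * sigma, grid)
    # p(y|x) for each level (rows) over the output grid (columns).
    cond = np.exp(-(y[None, :] - levels[:, None])**2 / (2 * sigma**2)) \
           / np.sqrt(2 * np.pi * sigma**2)
    py = probs @ cond                                 # marginal density p(y)
    ratio = np.maximum(cond, 1e-300) / np.maximum(py, 1e-300)
    integrand = np.where(cond > 0, cond * np.log2(ratio), 0.0)
    dy = y[1] - y[0]
    return float(probs @ integrand.sum(axis=1) * dy)

levels = np.array([-3.0, -1.0, 1.0, 3.0])             # unit-spacing 4-PAM
uniform = np.full(4, 0.25)
shaped = np.array([0.15, 0.35, 0.35, 0.15])           # Gaussian-like shaping
I_u = pam_mutual_information(levels, uniform, sigma=1.0)
I_s = pam_mutual_information(levels, shaped, sigma=1.0)
```

In the broadcast setting of the paper, rate pairs for the two users would be computed from such per-user mutual informations and the region maximized over the input distribution and constellation shape.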
This paper presents a new method to construct a dictionary for example-based super-resolution (SR) algorithms. Example-based SR relies on a dictionary of correspondences between low-resolution (LR) and high-resolution (HR) patches. Having a fixed, prebuilt dictionary allows the SR process to be sped up; however, to perform well in most cases, large dictionaries with a wide variety of patches are needed. Moreover, LR and HR patches are often not coherent, i.e., local LR neighborhoods are not preserved in the HR space. Our dictionary learning method takes a large dictionary as input and outputs a dictionary of "sustainable" size, yet with comparable or even better performance. It first applies a partitioning process, based on a joint k-means procedure, which enforces coherence between LR and HR patches by discarding pairs for which no common cluster is found. Second, the clustered dictionary is used to extract salient patches that form the output set.
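The coherence-enforcing partitioning can be illustrated with a toy stand-in. This sketch is an assumption-laden simplification of the joint k-means idea: LR and HR patch sets are clustered independently, each LR cluster is matched to the HR cluster it overlaps most, and pairs whose labels disagree are discarded. The tiny k-means, the farthest-point initialization, and all data sizes are illustrative choices, not the authors' procedure.

```python
# Hedged sketch of coherence filtering between LR/HR patch pairs via clustering.
import numpy as np

def _kmeans(X, k, iters=50):
    # Farthest-point initialization keeps this toy example deterministic.
    C = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None, :] - np.array(C)[None])**2).sum(-1), axis=1)
        C.append(X[int(np.argmax(d))])
    C = np.array(C)
    for _ in range(iters):                        # standard Lloyd iterations
        labels = np.argmin(((X[:, None, :] - C[None])**2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels

def coherent_pairs(lr, hr, k):
    """Boolean mask of (LR, HR) pairs whose cluster labels agree."""
    a, b = _kmeans(lr, k), _kmeans(hr, k)
    # Match each LR cluster to the HR cluster it overlaps most (majority vote).
    match = {j: int(np.bincount(b[a == j], minlength=k).argmax())
             for j in range(k)}
    return np.array([match[x] == y for x, y in zip(a, b)])

# Toy data: 3 clusters; HR is a scaled copy of LR (coherent), except 5 pairs
# whose HR member is moved to a different cluster (incoherent).
rng = np.random.default_rng(2)
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
labels_true = np.repeat(np.arange(3), 20)
lr = centers[labels_true] + 0.3 * rng.standard_normal((60, 2))
hr = 2.0 * lr
swapped = np.arange(5)
hr[swapped] = 2.0 * centers[(labels_true[swapped] + 1) % 3] \
              + 0.6 * rng.standard_normal((5, 2))
keep = coherent_pairs(lr, hr, k=3)
```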
This paper presents a new nonnegative dictionary learning method that decomposes an input data matrix into a dictionary of nonnegative atoms and a representation matrix with a strict 0-sparsity constraint. This constraint makes each input vector representable by a limited combination of atoms. The proposed method consists of two steps that are alternately iterated: a sparse coding stage and a dictionary update stage. For the dictionary update, an original method is proposed, which we call K-WEB, as it involves the computation of k WEighted Barycenters. The resulting algorithm is shown to outperform other methods in the literature that address the same learning problem, across different applications and on both synthetic and "real" data, i.e., data from natural images.
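The alternating structure can be sketched in a heavily simplified form. The code below is not the authors' K-WEB update: it fixes the sparsity to 1 atom per vector, codes each vector with a single nonnegative gain, and updates each atom as a coefficient-weighted barycenter of the vectors assigned to it, which is only loosely in the spirit of the described method. All names, sizes, and the toy data are assumptions.

```python
# Hedged sketch of alternating nonnegative dictionary learning with a hard
# 1-sparsity constraint (an illustrative stand-in, not the K-WEB algorithm).
import numpy as np

def learn_dictionary(X, n_atoms, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    D = np.abs(rng.standard_normal((n_atoms, X.shape[1])))   # nonnegative atoms
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    def code(D):
        # Sparse coding: one atom and one nonnegative gain per input vector.
        gains = np.maximum(X @ D.T, 0.0)
        atoms = np.argmax(gains, axis=1)
        return atoms, gains[np.arange(len(X)), atoms]

    for _ in range(iters):
        atoms, coeff = code(D)
        # Dictionary update: coefficient-weighted barycenter per atom.
        for j in range(n_atoms):
            m = atoms == j
            if m.any():
                bary = coeff[m] @ X[m]
                D[j] = bary / (np.linalg.norm(bary) + 1e-12)
    atoms, coeff = code(D)                        # final coding pass
    return D, atoms, coeff

# Toy data: 90 nonnegative vectors generated from 3 support-disjoint atoms.
rng = np.random.default_rng(3)
true_D = np.array([[1, 1, 0, 0, 0, 0],
                   [0, 0, 1, 1, 0, 0],
                   [0, 0, 0, 0, 1, 1]], dtype=float)
scales = rng.uniform(0.5, 2.0, 90)
X = scales[:, None] * true_D[np.repeat(np.arange(3), 30)]
D, atoms, coeff = learn_dictionary(X, n_atoms=3)
```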