Abstract-The standard data fusion methods may not be satisfactory for merging a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. In this paper, we develop a technique, based on multiresolution wavelet decomposition, for the merging and data fusion of such images. The method presented here consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. We have studied several possibilities and conclude that the method which produces the best results consists of adding the high-order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L = (R + G + B)/3) of the multispectral image. The method is, thus, an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. We used the "à trous" algorithm, which allows a dyadic wavelet to be used to merge nondyadic data in a simple and efficient scheme. We used the method to merge SPOT and LANDSAT (TM) images. The technique presented is clearly better than the IHS and LHS mergers in preserving both spectral and spatial information.
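The fusion scheme described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the B3-spline "à trous" decomposition is approximated with 1-D separable convolutions, the number of levels and the proportional redistribution of the new intensity over the R, G, B bands are assumptions for the sketch.

```python
import numpy as np
from scipy import ndimage

def atrous_planes(img, levels=2):
    """Decompose an image into 'a trous' wavelet detail planes.

    Each coarser approximation c_j is obtained by convolving with a
    B3-spline kernel whose taps are spaced by 2**j zeros ("holes");
    the wavelet plane is w_j = c_{j-1} - c_j.  By construction the
    sum of all planes plus the final residual reconstructs the image.
    """
    b3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    c = img.astype(float)
    planes = []
    for j in range(levels):
        step = 2 ** j
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = b3          # insert the "holes" between taps
        smooth = ndimage.convolve1d(c, kernel, axis=0, mode='reflect')
        smooth = ndimage.convolve1d(smooth, kernel, axis=1, mode='reflect')
        planes.append(c - smooth)
        c = smooth
    return planes, c                 # detail planes, coarse residual

def fuse_ihs_wavelet(r, g, b, pan, levels=2):
    """Add the pan image's high-frequency planes to the intensity L."""
    L = (r + g + b) / 3.0
    details, _ = atrous_planes(pan, levels)
    L_new = L + sum(details)
    # redistribute the sharpened intensity proportionally over the bands
    # (an assumption of this sketch, analogous to an IHS-style merger)
    ratio = np.where(L > 0, L_new / L, 1.0)
    return r * ratio, g * ratio, b * ratio
```

Because the à trous transform is undecimated, all planes have the image's original size, which is what makes merging nondyadic data straightforward.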
Abstract-Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures while improving spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by the multispectral sensors are injected into the fused bands, and the physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as spatial-detail overinjection and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been adjusted by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
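The three-step pipeline named at the start of this abstract (filtering, center-surround, pooling) can be illustrated with a toy sketch. Everything specific here is an assumption for illustration: difference-of-Gaussians filters stand in for the wavelet filter bank, the hand-picked `weights` stand in for the optimized ECSF scale-weighting function, and a plain weighted sum stands in for the inverse wavelet transform recombination.

```python
import numpy as np
from scipy import ndimage

def center_surround_saliency(img, scales=(1, 2, 4), weights=(1.0, 0.8, 0.6)):
    """Toy saliency map: weighted sum of per-scale center-surround responses.

    For each scale s, the center is a Gaussian blur with sigma=s and the
    surround a wider blur (sigma=3*s); their absolute difference is the
    center-surround response, weighted by a per-scale factor (a stand-in
    for the ECSF of the abstract) and accumulated into one map.
    """
    resp = np.zeros_like(img, dtype=float)
    for s, w in zip(scales, weights):
        center = ndimage.gaussian_filter(img.astype(float), sigma=s)
        surround = ndimage.gaussian_filter(img.astype(float), sigma=3 * s)
        resp += w * np.abs(center - surround)
    # normalize to [0, 1] for display
    span = resp.max() - resp.min()
    return (resp - resp.min()) / span if span > 0 else resp
```

In the model described by the abstract, the per-scale weights are not hand-picked as they are here but fitted to psychophysical and eye-fixation data, which is the point of the principled parameter selection.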
A new multiresolution wavelet model is presented here, which accounts for brightness assimilation and contrast effects in a unified framework, and includes known psychophysical and physiological attributes of the primate visual system (such as spatial frequency channels, oriented receptive fields, contrast sensitivity function, contrast non-linearities, and a unified set of parameters). Like other low-level models, such as the ODOG model [Blakeslee, B., & McCourt, M. E. (1999). A multiscale spatial filtering account of the white effect, simultaneous brightness contrast and grating induction. Vision Research, 39, 4361-4377], this formulation reproduces visual effects such as simultaneous contrast, the White effect, grating induction, the Todorović effect, Mach bands, the Chevreul effect and the Adelson-Logvinenko tile effects, but it also reproduces other previously unexplained effects such as the dungeon illusion, all using a single set of parameters.