We present a novel, computationally efficient, iterative, spatial gamut mapping algorithm. The proposed algorithm offers a compromise between colorimetrically optimal gamut clipping and the most successful spatial methods, a compromise achieved through the iterative nature of the method. At iteration level zero, the result is identical to gamut clipping; the more we iterate, the closer we come to an optimal, spatial, gamut mapping result. Here, optimal means a gamut mapping that preserves the hue of the image colours as well as the spatial ratios at all scales. Our results show that as few as five iterations are sufficient to produce an output that is as good as or better than that achieved by previous, computationally more expensive, methods. Being able to improve upon previous results with such a low number of iterations allows us to state that the proposed algorithm is O(N), N being the number of pixels. Results based on a challenging small destination gamut support our claim that the algorithm is indeed efficient.
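The abstract does not give the update rule, but the overall idea of starting from a gamut-clipped image and iterating towards a spatial result can be sketched as below. This is a minimal illustration, not the published algorithm: the box-shaped gamut, the Gaussian detail filter, the choice of sigma, and all function names are assumptions introduced here. Each iteration touches every pixel once with a fixed-size filter, which is where the O(N) cost per iteration comes from.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def clip_to_gamut(img, lo=0.0, hi=1.0):
    """Iteration level zero: plain per-pixel clipping to an illustrative box gamut."""
    return np.clip(img, lo, hi)

def iterative_spatial_gamut_map(img, n_iter=5, sigma=3.0, lo=0.0, hi=1.0):
    """Hypothetical sketch: start from the clipped image and, at each O(N)
    iteration, re-introduce local spatial detail from the original image
    before clipping back into the gamut."""
    out = clip_to_gamut(img, lo, hi)
    for _ in range(n_iter):
        # local detail of the original that hard clipping tends to flatten
        detail = img - gaussian_filter(img, sigma=(sigma, sigma, 0))
        base = gaussian_filter(out, sigma=(sigma, sigma, 0))
        out = clip_to_gamut(base + detail, lo, hi)
    return out

# usage: five iterations on a toy image containing out-of-gamut values
img = np.random.rand(64, 64, 3) * 1.4 - 0.2
mapped = iterative_spatial_gamut_map(img, n_iter=5)
```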
The set of metamers for a given device response can be calculated given the device's spectral sensitivities. Knowledge of the metamer set has been useful in practical applications such as color correction and reflectance recovery. Unfortunately, the spectral sensitivities of a camera or scanner are often not known, and they are difficult to estimate reliably outside the laboratory. We show how metamer sets can be calculated when a device's spectral sensitivities are not known. The result is built on two observations: first, the set of all reflectance spectra consists of convex combinations of certain basic colors that tend to be very bright (or dark) and have high chroma; second, the convex combinations that describe reflectance spectra result in convex combinations of red-green-blue (RGB) values. Thus, given an RGB value, it is possible to find the set of convex combinations of the RGB values of the basic colors that generate the same RGB value. The corresponding set of convex combinations of the basic spectra is the metamer set.
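To illustrate the final step, one member of the metamer set can be recovered as a feasible point of a small linear program: weights that are non-negative, sum to one, and reproduce the target RGB from the RGBs of the basic colors. The basis matrices, sample counts, and function names below are hypothetical; the full metamer set corresponds to the whole polytope of feasible weight vectors, not just the single solution returned here.

```python
import numpy as np
from scipy.optimize import linprog

def metamer_weights(rgb_target, basis_rgbs):
    """Find one convex-combination weight vector w >= 0, sum(w) = 1, with
    basis_rgbs @ w == rgb_target (i.e. one member of the metamer set).
    basis_rgbs: 3 x K matrix of RGB values of the assumed basic colors."""
    K = basis_rgbs.shape[1]
    # equality constraints: match the RGB value and make the weights sum to one
    A_eq = np.vstack([basis_rgbs, np.ones((1, K))])
    b_eq = np.concatenate([rgb_target, [1.0]])
    res = linprog(c=np.zeros(K), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * K, method="highs")
    return res.x if res.success else None

# usage with a toy basis of K hypothetical basic colors and spectra
K = 6
basis_rgbs = np.random.rand(3, K)
basis_spectra = np.random.rand(31, K)            # 31 wavelength samples, illustrative
w = metamer_weights(basis_rgbs @ np.full(K, 1 / K), basis_rgbs)
if w is not None:
    reflectance = basis_spectra @ w              # one metamer for the target RGB
```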
A salient feature is a part of the scene that stands out relative to neighboring items. By that we mean that a human observer would experience a salient feature as being more prominent. It is, however, important to quantify saliency as a mathematical quantity that lends itself to measurement. Different metrics have been shown to correlate with human fixation data. These include contrast, brightness, and orientation gradients calculated at different image scales. In this paper, we show that these metrics can be grouped under transformations pertaining to the dihedral group D4, which is the symmetry group of the square image grid. Our results show that salient features can be defined as the image features that are most asymmetric in their surrounds.
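The abstract does not specify how asymmetry under D4 is scored. A minimal sketch, assuming a per-patch comparison of a grayscale image against its eight D4 transforms (four rotations plus their mirror images), might look as follows; the patch size and the averaged absolute-difference score are illustrative assumptions, not the paper's measure.

```python
import numpy as np

def d4_transforms(patch):
    """The eight elements of the dihedral group D4 acting on a square patch:
    four rotations and their mirror images."""
    rots = [np.rot90(patch, k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]

def d4_asymmetry(image, patch_size=8):
    """Hypothetical sketch: score each patch by how much it differs from its
    own D4 transforms; highly asymmetric patches are taken as salient."""
    h, w = image.shape
    saliency = np.zeros((h // patch_size, w // patch_size))
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            p = image[i:i + patch_size, j:j + patch_size]
            saliency[i // patch_size, j // patch_size] = sum(
                np.abs(p - t).mean() for t in d4_transforms(p))
    return saliency

# usage on a toy grayscale image
img = np.random.rand(64, 64)
saliency_map = d4_asymmetry(img, patch_size=8)
```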