Contrast thresholds of vertical Gabor patterns were measured as a function of their eccentricity, size, shape, and phase using a 2AFC method. The patterns were 4 c/deg and were presented for 90 or 240 ms. Log thresholds increase linearly with eccentricity at a mean rate of 0.47 dB/wavelength. For patterns centered on the fovea, thresholds decrease as the area of the pattern increases over the entire standard deviation range of 12 wavelengths. The threshold-versus-area (TvA) functions are concave up on log-log coordinates. For small patterns there is an interaction between shape and size that depends on phase. Threshold contrast energy is a U-shaped function of area, with a minimum in the vicinity of 0.4 wavelength, indicating detection by small receptive fields. Observers can discriminate among patterns of different sizes when the patterns are at threshold, indicating that more than one mechanism is involved. The results are accounted for by a model in which patterns excite an array of slightly elongated receptive fields that are identical except that their sensitivity decreases exponentially with eccentricity. Excitation is raised to a power and then summed linearly across receptive fields to determine the threshold. The results are equally well described by an internal-noise-limited model. The TvA functions are insufficient to separately estimate the noise and the exponent of the power function. However, an experiment showing that mixing sizes within the trial sequence has no effect on thresholds suggests that the limiting noise does not increase with the number of mechanisms monitored.
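A minimal sketch of the pooling model described in this abstract may help make the computation concrete. Only the 0.47 dB/wavelength decay rate comes from the abstract itself; the pooling exponent, criterion, and all names below are hypothetical, and the receptive-field array is simplified to one dimension (ignoring the slight elongation of the receptive fields):

```python
import numpy as np

# Sketch of the model: identical receptive fields whose sensitivity falls
# exponentially with eccentricity; excitation is raised to a power and
# summed linearly; threshold is the contrast at which the sum reaches a
# fixed criterion.

DECAY_DB_PER_WAVELENGTH = 0.47   # eccentricity decay rate (from the abstract)
P = 3.0                          # pooling exponent (hypothetical value)

def pooled_response(contrast, rf_positions, stimulus_profile):
    """Sum of powered excitations across receptive fields.

    rf_positions: eccentricities of receptive-field centers (wavelengths)
    stimulus_profile: stimulus contrast weighting at each receptive field
    """
    # Sensitivity declines exponentially with eccentricity (dB -> linear).
    sensitivity = 10 ** (-DECAY_DB_PER_WAVELENGTH * np.abs(rf_positions) / 20)
    excitation = sensitivity * stimulus_profile * contrast
    return np.sum(excitation ** P)

def threshold(rf_positions, stimulus_profile, criterion=1.0):
    """Contrast at which the pooled response reaches the criterion.

    Because the pooled response scales as contrast**P, the threshold has
    a closed form: c = (criterion / R(1)) ** (1/P).
    """
    r_at_unit_contrast = pooled_response(1.0, rf_positions, stimulus_profile)
    return (criterion / r_at_unit_contrast) ** (1.0 / P)

# Example: a foveally centered Gaussian (Gabor envelope) of growing size
# yields decreasing thresholds, i.e., spatial summation.
positions = np.linspace(-12, 12, 481)   # eccentricity in wavelengths
for sd in (0.5, 1.0, 2.0, 4.0):
    envelope = np.exp(-positions**2 / (2 * sd**2))
    print(f"sd = {sd} wavelengths -> threshold {threshold(positions, envelope):.4f}")
```

In this form the model reproduces the qualitative behavior the abstract reports: larger envelopes recruit more receptive fields, so pooled excitation grows and thresholds fall, with the rate of summation governed by the exponent P.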
The benefits of incorporating saliency maps obtained with visual attention computational models into three image quality metrics are investigated. In particular, the performance of simple quality metrics is compared with that of quality metrics that incorporate saliency maps obtained using three popular visual attention computational models. Results show that the performance of simple quality metrics can be improved by adding visual attention information. Nevertheless, gains in performance depend on the precision of the visual attention model, the type of distortion, and the characteristics of the quality metric.

Introduction: Much effort in the scientific community has been devoted to the development of better image and video quality metrics that incorporate human visual system (HVS) features and, therefore, correlate better with the human perception of quality [2]. A recent development in the area consists of incorporating aspects of visual attention into the design of quality metrics, on the assumption that distortions appearing in less salient areas might be less visible and, therefore, less annoying.

Initial studies have reported that the incorporation of subjective saliency maps increases the performance of quality metrics [5]. Subjective saliency maps are obtained through psychophysical experiments using eye-tracking equipment, which records where subjects fixate as they look at pictures. Although subjective saliency maps are considered the ground truth in visual attention, they cannot be used in real-time applications. Thus, to incorporate visual attention aspects into the design of image quality metrics, we have to use visual attention computational models to generate objective saliency maps.

Very few works have tested the incorporation of specific computational attention models into image quality metrics [9]. To date, no work has compared the incorporation of visual attention computational models against subjective saliency maps. In this Letter, we investigate the benefit of incorporating objective saliency maps into three image quality metrics. We compare the performance of the original quality metrics with the performance of quality metrics that incorporate objective saliency maps. We also study the effects that different types of degradation have on the computational model and, consequently, on the performance of the final metric.
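A minimal sketch of the weighting scheme this approach implies, assuming a hypothetical per-pixel quality map (e.g., a local SSIM-like map) and an objective saliency map from an attention model; the function name and pooling rule below are illustrative, not the specific metrics evaluated in the Letter:

```python
import numpy as np

def saliency_weighted_score(quality_map: np.ndarray,
                            saliency_map: np.ndarray,
                            eps: float = 1e-8) -> float:
    """Pool a local quality map into a single score, weighting each
    location by its visual saliency so that distortions in salient
    regions count more toward the final score.

    quality_map:  per-pixel quality values (e.g., a local SSIM map)
    saliency_map: per-pixel saliency from a visual attention model,
                  same shape as quality_map
    """
    weights = saliency_map / (saliency_map.sum() + eps)
    return float((weights * quality_map).sum())

# Example: a distortion confined to a low-saliency corner lowers the
# saliency-weighted score less than the plain spatial average does.
rng = np.random.default_rng(0)
quality = np.ones((64, 64))
quality[:16, :16] = 0.3          # heavy local distortion...
saliency = rng.random((64, 64))
saliency[:16, :16] *= 0.1        # ...in a low-saliency region

print("plain mean:       ", quality.mean())
print("saliency-weighted:", saliency_weighted_score(quality, saliency))
```

Under this pooling rule, the quality of the saliency map directly shapes the final score, which is consistent with the finding that gains depend on the precision of the attention model and on the distortion type.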