Visual backward masking is not only an empirically rich and theoretically interesting phenomenon; it has also found increasing application as a powerful methodological tool in studies of visual information processing and as a useful instrument for investigating visual function in a variety of specific subject populations. Since the dual-channel, sustained-transient approach to visual masking was introduced about two decades ago, several new models of backward masking and metacontrast have been proposed as alternative approaches to visual masking. In this article, we outline, review, and evaluate three such approaches: an extension of the dual-channel approach as realized in the neural network model of retino-cortical dynamics (Ogmen, 1993), the perceptual retouch theory (Bachmann, 1984, 1994), and the boundary contour system (Francis, 1997; Grossberg & Mingolla, 1985b). Recent psychophysical and electrophysiological findings relevant to backward masking are reviewed and, whenever possible, related to the aforementioned models. Besides noting the positive aspects of these models, we also list their problems and suggest changes that may improve them and experiments that can test them empirically.

Visual masking occurs whenever the visibility of one stimulus, called the target, is reduced by the presence of another stimulus, designated the mask. Visual masking has been, and continues to be, a powerful psychophysical tool for investigating the steady-state properties of spatial-processing mechanisms.
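The sustained-transient logic underlying the dual-channel approach can be illustrated with a toy computation. The sketch below is a deliberately minimal caricature, not an implementation of the retino-cortical dynamics, perceptual retouch, or boundary contour system models; the Gaussian response profiles, latencies, widths, and inhibition gain are all illustrative assumptions. It reproduces one qualitative signature of metacontrast: the mask's fast transient response suppresses the target's slower sustained response most strongly at intermediate positive stimulus onset asynchronies (SOAs), yielding a U-shaped (type B) masking function.

```python
import numpy as np

def gaussian(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def target_visibility(soa_ms, sustained_latency=120.0, transient_latency=60.0,
                      sustained_width=40.0, transient_width=15.0,
                      inhibition_gain=0.06):
    """Toy transient-on-sustained inhibition (all parameters illustrative).

    The target is read out through a slow 'sustained' channel; the mask
    drives a fast 'transient' channel that inhibits it. Suppression scales
    with the temporal overlap of the two responses.
    """
    t = np.arange(-100.0, 400.0, 0.5)   # time axis in ms
    dt = t[1] - t[0]
    target_sustained = gaussian(t, sustained_latency, sustained_width)
    mask_transient = gaussian(t, soa_ms + transient_latency, transient_width)
    overlap = np.sum(target_sustained * mask_transient) * dt
    return 1.0 / (1.0 + inhibition_gain * overlap)

# Visibility dips at the SOA where the mask's transient response arrives
# together with the target's sustained response (here, around SOA = 60 ms).
for soa in range(-40, 161, 20):
    print(f"SOA {soa:4d} ms -> visibility {target_visibility(soa):.2f}")
```

In this caricature the U shape falls out of the latency difference between the two channels alone: at very short or very long SOAs the transient and sustained responses no longer overlap, so target visibility recovers.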
Increasing the luminance of the moving central segment should decrease its latency (d_m) while that of the strobed segment (d_s) remains constant. The latency-difference hypothesis therefore predicts that the observed spatial lead of the moving central segment should increase.

To test this prediction, we measured the spatial lead of the moving central segment as a function of the detectability of the central segment while keeping the detectability of the strobed segments constant. Here we use detectability to refer to the number of log units of luminance (Lu) above the detection threshold; detectability of the strobed segments was 0.3 Lu for subjects S.S.P. and G.P., and 0.5 Lu for T.L.N. The temporal lead of the moving central segment, averaged across subjects, increases systematically from 20 to 70 ms when its detectability increases by 1.0 Lu (Fig. 1b).

Increasing the luminance of the strobed segments while keeping that of the moving central segment constant should decrease d_s while d_m remains constant. The latency-difference hypothesis predicts that the observed spatial lead of the moving central segment should decrease and, if the luminance of the strobed segments is high enough, the moving central segment should be perceived to lag behind spatially. We tested this prediction by measuring spatial lead as a function of the detectability of the strobed segments, while keeping the detectability of the moving central segment constant (1.5 Lu above the detection threshold for subjects G.P. and T.L.N., and 0.8 Lu for S.S.P.). The observed temporal lead of the moving central segment, averaged across subjects, decreases systematically from 80 to −30 ms as the detectability of the strobed segments increases by 1.5 to 2.0 Lu (Fig. 1c).

These results support the predictions of the latency-difference hypothesis and show that the motion-extrapolation mechanism does not compensate for stimulus-dependent variations in latency. Indeed, theoretical calculations show that the putative motion-extrapolation mechanism would have to be undercompensating by at least 120 ms to account for the data in Fig. 1. But a motion-extrapolation mechanism that does not adequately compensate for variations in visual latency would not appreciably improve the accuracy of real-time visually guided behaviour.
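The quantitative logic of this account can be made explicit. In the minimal formalization below (the notation is ours, not the authors': tau_m and tau_s denote the processing latencies of the moving and strobed segments, and v the speed of the moving segment), the perceived lead follows directly from the latency difference:

```latex
% Latency-difference account (notation is ours, not the paper's):
%   \tau_m, \tau_s : processing latencies of the moving and strobed segments
%   v              : speed of the moving central segment
\Delta t = \tau_s - \tau_m, \qquad \Delta x = v\,\Delta t
```

Raising the detectability of the moving segment shortens tau_m and so increases the lead Delta t (the 20 to 70 ms change above); raising the detectability of the strobed segments shortens tau_s and so decreases Delta t, which can become negative, i.e., a perceived spatial lag (the 80 to −30 ms change above).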
How features are attributed to objects is one of the most puzzling issues in the neurosciences. A deeply entrenched view is that features are perceived at the locations where they are presented. Here, we show that features in motion displays can be systematically attributed from one location to another even though the elements possessing those features are invisible. Furthermore, features can be integrated across locations. Feature mislocalizations are usually treated as errors and as limits of the visual system. On the contrary, we show that the nonretinotopic feature attributions reported here follow the rules of grouping precisely, suggesting that they reflect a fundamental computational strategy rather than errors of visual processing.
In human vision, the optics of the eye map neighboring points of the environment onto neighboring photoreceptors in the retina. This retinotopic encoding principle is preserved in the early visual areas. Under normal viewing conditions, because of object motion and eye movements, the retinotopic representation of the environment undergoes fast and drastic shifts. Yet our environment appears perceptually stable, suggesting the existence of non-retinotopic representations in addition to the well-known retinotopic ones. Here, we present a simple psychophysical test for determining whether a given visual process is accomplished in retinotopic or non-retinotopic coordinates. As examples, we show that visual search and motion perception can occur within a non-retinotopic frame of reference. These findings suggest that more mechanisms than previously thought operate non-retinotopically. Whether this is true for a given visual process can easily be determined with our "litmus test."