Eye movements are an integral and essential part of our human foveated vision system. Here, we review recent work on voluntary eye movements, with an emphasis on the last decade. More selectively, we address two of the most important questions about saccadic and smooth pursuit eye movements in natural vision. First, why do we saccade to where we do? We argue that, as for many other aspects of vision, several different circuits related to salience, object recognition, actions, and value ultimately interact to determine gaze behavior. Second, how are pursuit eye movements and the perceptual experience of visual motion related? We show that motion perception and pursuit share many properties, but they also have separate noise sources that can lead to dissociations between them. We emphasize that pursuit actively modulates visual perception and that it can provide valuable information for motion perception.
People can direct their gaze at a visual target for extended periods of time. Yet, even during fixation the eyes make small, involuntary movements (e.g., tremor, drift, and microsaccades). This can be a problem in experiments that require stable fixation. The shape of a fixation target can be easily manipulated in the context of many experimental paradigms. Thus, from a purely methodological point of view, it would be useful to know whether there is a particular fixation-target shape that minimizes involuntary eye movements during fixation, because this shape could then be used in experiments that require stable fixation. Based on this methodological motivation, the current experiments tested whether the shape of a fixation target can be used to reduce eye movements during fixation. In two separate experiments, subjects directed their gaze at a fixation target for 17 s on each trial. The shape of the fixation target varied from trial to trial and was drawn from a set of seven shapes whose use has been frequently reported in the literature. To quantify fixation stability, we computed spatial dispersion and microsaccade rate. We found that only a target shape resembling a combination of a bull's eye and a cross hair yielded both low dispersion and a low microsaccade rate. We therefore recommend the combination of bull's eye and cross hair as the fixation-target shape for experiments that require stable fixation.
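As a rough illustration of how such stability measures can be computed, the following Python sketch derives spatial dispersion and a velocity-based microsaccade rate from recorded gaze samples. The sampling rate, velocity threshold, and minimum event duration used here are illustrative assumptions, not the exact analysis pipeline used in these experiments.

    # Hedged sketch: quantifying fixation stability from gaze samples given in
    # degrees of visual angle. Parameter values are illustrative only.
    import numpy as np

    def spatial_dispersion(x, y):
        """Dispersion as the square root of the summed positional variance (deg)."""
        return np.sqrt(np.var(x) + np.var(y))

    def microsaccade_rate(x, y, fs=500.0, vel_threshold=6.0, min_samples=3):
        """Count supra-threshold velocity events and return events per second.
        A fixed threshold (deg/s) is a simplification; published detectors often
        scale the threshold with the noise level of the velocity signal."""
        vx = np.gradient(x) * fs                 # horizontal velocity in deg/s
        vy = np.gradient(y) * fs                 # vertical velocity in deg/s
        speed = np.hypot(vx, vy)
        above = np.concatenate(([False], speed > vel_threshold, [False]))
        edges = np.diff(above.astype(int))
        onsets = np.flatnonzero(edges == 1)      # starts of supra-threshold runs
        offsets = np.flatnonzero(edges == -1)    # ends of supra-threshold runs
        n_events = int(np.sum((offsets - onsets) >= min_samples))
        return n_events / (len(x) / fs)          # events per second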
Due to the inhomogeneous visual representation across the visual field, humans use peripheral vision to select objects of interest and foveate them by saccadic eye movements for further scrutiny. Thus, there is usually peripheral information available before and foveal information available after a saccade. In this study, we investigated the integration of information across saccades. We measured reliabilities (i.e., the inverse of variance) separately in a presaccadic peripheral and a postsaccadic foveal orientation-discrimination task. From this, we predicted trans-saccadic performance and compared it to observed values. We show that the integration of incongruent peripheral and foveal information is biased according to their relative reliabilities and that the reliability of the trans-saccadic information equals the sum of the peripheral and foveal reliabilities. Both results are consistent with and indistinguishable from statistically optimal integration according to the maximum-likelihood principle. Additionally, we tracked the gathering of information around the time of the saccade with high temporal precision by using a reverse-correlation method. Information gathering starts to decline between 100 and 50 ms before saccade onset and recovers immediately after saccade offset. Altogether, these findings show that the human visual system can effectively use peripheral and foveal information about object features and that visual perception does not simply correspond to disconnected snapshots during each fixation.
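The maximum-likelihood prediction tested here has a simple closed form: each estimate is weighted by its relative reliability, and the reliability of the combined estimate is the sum of the individual reliabilities. The short Python sketch below illustrates this computation; the function name and the numerical example are ours and do not reproduce the study's data.

    # Minimal sketch of reliability-weighted (maximum-likelihood) integration of a
    # peripheral and a foveal estimate. Reliability is defined as 1 / variance.
    def integrate_mle(est_peripheral, var_peripheral, est_foveal, var_foveal):
        r_p = 1.0 / var_peripheral           # peripheral reliability
        r_f = 1.0 / var_foveal               # foveal reliability
        w_p = r_p / (r_p + r_f)              # weight given to the peripheral estimate
        est_combined = w_p * est_peripheral + (1.0 - w_p) * est_foveal
        var_combined = 1.0 / (r_p + r_f)     # combined reliability is the sum r_p + r_f
        return est_combined, var_combined

    # Toy example: the less reliable peripheral estimate is pulled toward the foveal one.
    print(integrate_mle(10.0, 4.0, 12.0, 1.0))   # -> (11.6, 0.8)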
Humans shift their gaze to a new location several times per second. It is still unclear what determines where they look next. Fixation behavior is influenced by the low-level salience of the visual stimulus, such as luminance, contrast, and color, but also by high-level task demands and prior knowledge. Under natural conditions, different sources of information might conflict with each other and have to be combined. In our paradigm, we trade off visual salience against expected value. We show that both salience and value information influence the saccadic end point within an object, but with different time courses. The relative weights of salience and value are not constant but vary from eye movement to eye movement, depending critically on the availability of the value information at the time when the saccade is programmed. Short-latency saccades are determined mainly by salience, but value information is taken into account for long-latency saccades. We present a model that describes these data by dynamically weighting and integrating detailed topographic maps of visual salience and value. These results support the notion of independent neural pathways for the processing of visual information and value.

Keywords: neuroeconomics | decision-making | cue combination | visual perception

Because of foveal specialization for high acuity and color vision, humans frequently move their eyes to project different parts of the visual scene on the fovea. Although the basic networks for the programming and execution of saccades have been studied for decades (1, 2), surprisingly little is known about the neural processes that underlie selection of the point of fixation of the next saccade. To some degree, the weighted combination of basic visual-stimulus features can predict saccadic eye movements in natural scenes (3-5). These basic stimulus features are, among others, local differences in luminance, color, or orientation and are combined by the visual system in a bottom-up, image-based salience map. However, the salience difference between fixated and nonfixated image locations is typically rather small (6, 7), indicating that the influence of salience may be modulated by other factors. Visual salience, by definition, is determined by features of the visual scene alone and therefore reflects exclusively visual bottom-up processing. Other factors reflect the influence of top-down processing. Task demands, for example, impose constraints on gaze patterns in different activities such as visual search (8), manipulating an object (9), playing ball sports, preparing a cup of tea (10), and navigating between obstacles (11). In all these examples, gaze is concentrated on objects that are relevant for the task.

Along different lines, recent research in neuroeconomics has used saccadic eye movements as a tool to uncover the neural bases of primate choice behavior. The results of these experiments indicate that value can be an important determinant of the neural activity underlying the selection of a saccadic target when one object bears ...
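To make the idea of dynamic weighting concrete, the following Python sketch combines toy topographic salience and value maps with a latency-dependent weight and reads out the saccade endpoint as the peak of the combined map. The maps, the weighting function, and its time constant are illustrative assumptions rather than the fitted model reported in the paper.

    # Hedged sketch of dynamically weighted salience and value maps: the weight on
    # value grows with the time available before the saccade is launched.
    import numpy as np

    def combined_map(salience, value, latency_ms, tau_ms=150.0):
        """Weight value more strongly for longer saccade latencies."""
        w_value = 1.0 - np.exp(-latency_ms / tau_ms)   # ~0 for immediate saccades, -> 1 late
        return (1.0 - w_value) * salience + w_value * value

    def saccade_endpoint(salience, value, latency_ms):
        """Read out the location with the highest combined activation."""
        m = combined_map(salience, value, latency_ms)
        return np.unravel_index(np.argmax(m), m.shape)

    # Toy 2-D maps: salience peaks at one location, value at another.
    salience = np.zeros((20, 20)); salience[5, 5] = 1.0
    value = np.zeros((20, 20)); value[15, 15] = 1.0
    print(saccade_endpoint(salience, value, latency_ms=80))    # lands near the salience peak
    print(saccade_endpoint(salience, value, latency_ms=400))   # lands near the value peak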