The ability of smooth pursuit eye movements to anticipate the future motion of targets has been known since the pioneering work of Dodge, Travis, and Fox (1930) and Westheimer (1954). This article reviews aspects of anticipatory smooth eye movements, focusing on the roles of the different internal or external cues that initiate anticipatory pursuit. We present new results showing that the anticipatory smooth eye movements evoked by different cues differ substantially, even when the cues are equivalent in the information conveyed about the direction of future target motion. Cues that convey an easily interpretable visualization of the motion path produce faster anticipatory smooth eye movements than the other cues tested, including symbols associated arbitrarily with the path, and the same target motion tested repeatedly over a block of trials. The differences among the cues may be understood within a common predictive framework in which the cues differ in the level of subjective certainty they provide about the future path. Pursuit may be driven by a combined signal in which immediate sensory motion, and the predictions about future motion generated by sets of cues, are weighted according to their respective levels of certainty. Anticipatory smooth eye movements, an overt indicator of expectations and predictions, may not operate in isolation, but may be part of a global process in which the brain analyzes available cues, formulates predictions, and uses them to control perceptual, motor, and cognitive processes.
Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment-by-moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream.
Smooth pursuit eye movements anticipate the future motion of targets when future motion is either signaled by visual cues or inferred from past history. To study the effect of anticipation derived from movement planning, the eye pursued a cursor whose horizontal motion was controlled by the hand via a mouse. The direction of a critical turn was specified by a cue or was freely chosen. Information from planning to move the hand (which itself showed anticipatory effects) elicited anticipatory smooth eye movements, allowing the eye to track self-generated target motion with virtually no lag. Lags were present only when either visual cues or motor cues were removed. The results show that information derived from the planning of movement is as effective as visual cues in generating anticipatory eye movements. Eye movements in dynamic environments will be facilitated by collaborative anticipatory movements of hand and eye. Cues derived from movement planning may be particularly valuable in fast-paced human-computer interactions.