We studied the role of attention and task demands in implicit change detection. Subjects engaged in an object sorting task performed in a virtual reality environment, where we changed the properties of an object while the subject was manipulating it. The task ensures that subjects are looking at the changed object immediately before and after the change. Our results demonstrate that in this situation, subjects' ability to notice changes to the object depends strongly on momentary task demands. Surprisingly, frequent noticing is not guaranteed by the task relevance of the changed object attribute per se; the attribute must be task relevant at exactly the right times. Moreover, the simplicity of the objects used indicates that change blindness occurs in situations where the visual short-term memory load is minimal, suggesting a potential dissociation between short-term memory limitations and change blindness. Finally, we found that changes may go unnoticed even when subjects are visually tracking the object at the moment of change. Our experiments suggest a highly purposive and task-specific nature of human vision, in which information extracted from the fixation point is used for certain computations only "just in time" when needed to solve the current goal.
During the performance of natural tasks, subjects sometimes fixate objects that they manipulate several seconds later. Such early looks are known as "look-ahead fixations" (Pelz and Canosa in Vision Res 41(25-26):3587-3596, 2001). To date, little is known about their function. To investigate their possible role, we measured fixation patterns in a model-building task. Subjects assembled models in two sequences; in one sequence, reaching and grasping were interrupted by an additional action. Results show look-ahead fixations prior to 20% of the reaching and grasping movements, occurring on average 3 s before the reach. Their frequency was influenced by task sequence, suggesting that they are purposeful and play a role in task planning. To see whether look-aheads influenced the subsequent eye movements during the reach, we measured eye-hand latencies and found that they increased by 122 ms following a look-ahead to the target. The initial saccades to the target that accompanied a reach were also more accurate following a look-ahead. These results demonstrate that look-aheads influence subsequent visuo-motor coordination, and imply that visual information about the temporal and spatial structure of the scene was retained across intervening fixations and influenced subsequent movement programming. Additionally, head movements that accompanied look-aheads were significantly smaller in amplitude (by 10 degrees) than those that accompanied reaches to the same locations, supporting previous evidence that head movements play a role in the control of hand movements. This study provides evidence for the anticipatory use of gaze in acquiring information about objects for future manipulation.
Attention and working memory limitations set strict limits on visual representations, yet little is known about how these limits constrain the acquisition of information in ongoing visually guided behavior. Subjects performed a brick sorting task in a virtual environment. On about 10% of trials, a change was made to one of the features of the brick being held. Rates of detection for feature changes were generally low and depended on the pick-up and put-down relevance of the feature to the sorting task. Subjects' sorting decisions suggest that changes may be missed because of a failure to update the changed feature. The authors also explore how hand and eye movements are coordinated for the strategic acquisition and storage of visual information throughout the task.
In natural behavior, fixation patterns are tightly linked to the ongoing task. However, a critical problem for task-driven systems is dealing with unexpected stimuli. We studied the effect of unexpected potential collisions with pedestrians on the gaze distribution of subjects walking in a virtual environment. Pedestrians on a collision course with the subject were surprisingly ineffective at evoking fixations, especially when subjects were occupied with an additional task, suggesting that potential collisions do not automatically attract attention. However, prior fixations on pedestrians did increase collision detection performance. Additionally, detecting a potential collision led to a short-term change in the strategy of looking at subsequent pedestrians. The overall pattern of results is consistent with the hypothesis that subjects typically rely on top-down initiated mechanisms to detect unexpected events such as potential collisions. For this to be effective, subjects must learn an appropriate schedule for initiating search through experience with the probabilities of environmental events.
Eye movements during natural tasks are well coordinated with ongoing task demands, and many variables could influence gaze strategies. Sprague and Ballard (2003) proposed a gaze-scheduling model that uses a utility-weighted uncertainty metric to prioritize fixations on task-relevant objects, predicting that human gaze should be influenced by both reward structure and task-relevant uncertainties. To test this conjecture, we tracked the eye movements of participants in a simulated driving task in which uncertainty and implicit reward (via task priority) were varied. Participants were instructed to simultaneously perform a Follow Task, in which they followed a lead car at a specific distance, and a Speed Task, in which they drove at an exact speed. We varied implicit reward by instructing participants to emphasize one task over the other, and varied uncertainty in the Speed Task through the presence or absence of uniform noise added to the car's velocity. Participants' gaze data were classified by the image content near fixation and segmented into looks. Gaze measures, including look proportion, duration, and interlook interval, showed that drivers monitored the speedometer more closely when it had a high level of uncertainty, but only when it was also associated with high task priority or implicit reward. The observed interaction appears to be an example of a simple mechanism whereby the reduction of visual uncertainty is gated by behavioral relevance. This lends qualitative support to the primary variables controlling gaze allocation proposed in the Sprague and Ballard model.
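The scheduling principle behind the Sprague and Ballard model can be illustrated with a minimal sketch. Assuming that each concurrent task accumulates uncertainty about its relevant state while unattended, and that gaze is granted to the task with the highest reward-weighted uncertainty, a toy scheduler might look like the following (the module names, growth rates, and variance-reset rule are hypothetical simplifications for illustration, not the published model):

```python
# Minimal sketch of utility-weighted uncertainty gaze scheduling,
# loosely in the spirit of Sprague and Ballard (2003). All parameter
# values below are illustrative assumptions, not values from the paper.

class TaskModule:
    def __init__(self, name, reward_weight, noise_growth):
        self.name = name
        self.reward_weight = reward_weight  # implicit reward / task priority
        self.noise_growth = noise_growth    # how quickly uncertainty grows
        self.variance = 0.0                 # current state-estimate uncertainty

    def propagate(self):
        # Uncertainty about task-relevant state grows while the task
        # is not being fixated.
        self.variance += self.noise_growth

    def expected_loss(self):
        # Utility-weighted uncertainty: the expected cost of acting on
        # a noisy estimate, scaled by the task's reward.
        return self.reward_weight * self.variance

def schedule_gaze(modules, steps):
    """Allocate each time step's fixation to the costliest module."""
    history = []
    for _ in range(steps):
        for m in modules:
            m.propagate()
        target = max(modules, key=lambda m: m.expected_loss())
        target.variance = 0.0  # a fixation resets that task's uncertainty
        history.append(target.name)
    return history

# Example: a noisy speedometer competes with a high-priority follow task.
follow = TaskModule("follow_car", reward_weight=1.0, noise_growth=0.2)
speed = TaskModule("speedometer", reward_weight=0.5, noise_growth=0.6)
print(schedule_gaze([follow, speed], steps=10))
```

Because expected loss is the product of reward weight and accumulated variance, raising either the speedometer's noise growth (uncertainty) or its reward weight (task priority) increases its share of fixations, and the two factors interact multiplicatively, mirroring the gating of uncertainty reduction by behavioral relevance described in the abstract.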