Attention capture is often operationally defined as speeded search performance when an otherwise nonpredictive stimulus happens to be the target of a visual search. That is, if a stimulus captures attention, it should be searched with priority even when it is irrelevant to the task. Given this definition, only the abrupt appearance of a new object (see, e.g., Jonides & Yantis, 1988) and one type of luminance contrast change (Enns, Austen, Di Lollo, Rauschenberger, & Yantis, 2001) have been shown to strongly capture attention. We show that translating and looming stimuli also capture attention. This phenomenon does not occur for all dynamic events: We also show that receding stimuli do not attract attention. Although the sorts of dynamic events that capture attention do not fit neatly into a single category, we speculate that stimuli that signal potentially behaviorally urgent events are more likely to receive attentional priority.
Much of our interaction with the visual world requires us to isolate some currently important objects from other less important objects. This task becomes more difficult when objects move, or when our field of view moves relative to the world, requiring us to track these objects over space and time. Previous experiments have shown that observers can track a maximum of about 4 moving objects. A natural explanation for this capacity limit is that the visual system is architecturally limited to handling a fixed number of objects at once, a so-called "magical number 4" of visual attention. In contrast to this view, Experiment 1 shows that tracking capacity is not fixed. At slow speeds it is possible to track up to 8 objects, and yet there are fast speeds at which only a single object can be tracked. Experiment 2 suggests that the limit on tracking is related to the spatial resolution of attention. These findings suggest that the number of objects that can be tracked is primarily set by a flexibly allocated resource, which has important implications for the mechanisms of object tracking and for the relationship between object tracking and other cognitive processes.
When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
The brain has finite processing resources so that, as tasks become harder, performance degrades. Where do the limits on these resources come from? We focus on a variety of capacity-limited buffers related to attention, recognition, and memory that we claim have a two-dimensional ‘map’ architecture, where individual items compete for cortical real estate. This competitive format leads to capacity limits that are flexible, set by the nature of the content and their locations within an anatomically delimited space. We contrast this format with the standard ‘slot’ architecture and its fixed capacity. Using visual spatial attention and visual short-term memory as case studies, we suggest that competitive maps are a concrete and plausible architecture that limits cognitive capacity across many domains.
In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors: the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.