In three spatial cueing experiments, we investigated whether a negative search criterion (i.e., a task-relevant feature that negatively defines the target) can guide visual attention in a top-down manner. Our participants searched for a target defined by a negative feature (e.g., red if the target was a nonred horizontal bar). Before the target, a peripheral singleton cue was shown at the target position (valid condition) or at a nontarget position (invalid condition). We found slower reaction times in valid than in invalid trials only with singleton cues matching the negative feature. Importantly, we ruled out the possibility that participants searched for target-associated features instead of suppressing the negative feature (Experiment 1). Furthermore, we demonstrated that suppression of cues with a negative feature was stronger than the mere ignoring of singleton cues with a task-irrelevant feature. Finally, cue-target intervals of 60 ms and 150 ms elicited the same suppression effects for cues matching the negative feature. These findings suggest that the usage of a negative search criterion elicited feature-selective proactive suppression (Experiments 2 and 3). Thus, our results provide the first evidence of top-down attentional suppression dependent on current task goals as a strategy operating in parallel to the goal-directed search for target-defining features (Experiment 2).

Public Significance Statement
Previous research has indicated that even if features should be ignored, the to-be-ignored features nevertheless capture attention. However, we showed that if participants searched for a target defined by a negative feature (i.e., the target was a nonred horizontal bar), the negative feature (here: the color red) was actively suppressed during attentional guidance. Our results extend the knowledge of attentional guidance by showing selective suppression of task-relevant features that negatively define the target.
We exhaustively review the published research on eye movements during real-world night driving, an important field of research because fatal road traffic accidents at night outnumber fatal accidents during the daytime. Eye tracking provides a unique window into the underlying cognitive processes. The studies were interpreted and evaluated against the background of two descriptions of the driving task: Gibson and Crooks' (1938) description of driving as the visually guided selection of a driving path through the unobstructed field of safe travel, and Endsley's (1995) situation awareness model, which highlights the influence of drivers' interpretations and mental capacities (e.g., cognitive load, memory capacity) on successful task performance. Our review revealed that drivers show expedient looking behavior, directed to the boundaries of the field of safe travel and to other road users. Thus, the results indicated that controlled (intended) eye movements predominated, although some results could also have reflected automatic gaze attraction by salient but task-irrelevant distractors. Likewise, it is not entirely certain whether the wider dispersion of eye fixations during daytime driving (compared with night driving) reflected controlled and beneficial strategies, or whether it was (partly) due to distraction by stimuli unrelated to driving. We conclude by proposing a more fine-grained description of the driving task, in which the contribution of eye movements to three different subtasks is detailed. This model could help fill an existing gap in the reviewed research: Most studies did not relate eye movements to other driving performance measurements for the evaluation of real-world night driving performance.
It is still unclear which features of a two-dimensional shape (e.g., triangle, square) can efficiently guide visual attention. Possible guiding features are edge orientations (single oriented shape edges; e.g., verticals during search for squares), global outlines (combination of the target edges; e.g., squares), or global orientations (specific orientations of global outlines; e.g., squares but not diamonds). Using a contingent-capture protocol, we found evidence for task-dependent guidance by the global shape outline and the global shape orientation. First, if participants searched for a shape (an equilateral triangle) independent of its pointing direction, cues with the same global shape outline as the target captured attention, even without sharing any edge orientations with the target. Second, however, if a shape's specific pointing direction was task-relevant, attentional guidance changed to the specific orientation of the global shape. Our results show that the global shape outline and the global shape orientation can both guide visual attention, contingent on the nature of the shape and the current search goals. We discuss differences between shapes (equilateral triangles and isosceles trapezoids) considering models of shape perception and conclude with a critical review of the contingent-capture protocol as a complementary method to visual search protocols.

Public Significance Statement
This study shows that when we search for a two-dimensional object, its shape can be used to guide our visual attention so that we can efficiently find the object. In contrast, oriented edges of such a two-dimensional object are not sufficient to explain successful search for shapes. Whether the orientation of a shape is also used to guide attention depends on the shape itself and on the necessities imposed by the search context.
Visual attention and saccadic eye movements are linked in a tight, yet flexible fashion. In humans, this link is typically studied with dual-task setups. Participants are instructed to execute a saccade to some target location, while a discrimination target is flashed on a screen before the saccade can be made. Participants are also instructed to report a specific feature of this discrimination target at the end of the trial. Discrimination performance is usually better if the discrimination target occurred at the same location as the saccade target than if it occurred at a different location, which is explained by the mandatory shift of attention to the saccade target location before saccade onset. This pre-saccadic shift of attention presumably enhances the perception of the discrimination target if it occurred at the same location, but not if it occurred at a different one. It is, however, known that a dual-task setup can alter the primary process under investigation. Here, we directly compared pre-saccadic attention in single-task versus dual-task setups using concurrent electroencephalography (EEG) and eye-tracking. Our results corroborate the idea of a pre-saccadic shift of attention. However, they call into question whether this shift leads to the same-position discrimination advantage: The relation of saccade and discrimination target position affected the EEG signal only after saccade onset. Our results thus favor an alternative explanation based on the role of saccades in the consolidation of sensory and short-term memory. We conclude that studies with dual-task setups arrived at a valid conclusion despite not measuring exactly what they intended to measure.
In the current review, we argue that experimental results usually interpreted as evidence for cognitive resource limitations could also reflect functional necessities of human information processing. First, we point out that selective processing of only specific features, objects, or locations at each moment in time allows humans to monitor the success and failure of their own overt actions and covert cognitive procedures. We then proceed to show how certain instances of selectivity are at odds with commonly assumed resource limitations. Next, we discuss examples of seemingly automatic, resource-free processing that challenge the resource view but can be easily understood from the functional perspective of monitoring cognitive procedures. Finally, we suggest that neurophysiological data supporting resource limitations might actually reflect mechanisms of how procedural control is implemented in the brain.