Previous work has shown that abrupt visual onsets capture attention. Possible mechanisms for this phenomenon include (a) a luminance-change detection system and (b) a mechanism that detects the appearance of new perceptual objects. Experiments 1 and 2 revealed that attention is captured in visual search by the appearance of a new perceptual object even when the object is equiluminant with its background and thus exhibits no luminance change when it appears. Experiment 3 showed that a highly salient luminance increment alone is not sufficient to capture attention. These findings suggest that attentional capture is mediated by a mechanism that detects the appearance of new perceptual objects.
Maljkovic and Nakayama (1994) demonstrated an automatic benefit of repeating the defining feature of the target in search guided by salience. Thus, repetition influences target selection in search guided by bottom-up factors. Four experiments demonstrate this repetition effect in search guided by top-down factors, and so the repetition effect is not merely part of the mechanism for determining what display elements are salient. The effect is replicated in singleton search and in three situations requiring different degrees of top-down guidance: when the feature defining the target is less salient than the feature defining the response, when there is more than one singleton in the defining dimension, and when the target is defined by a conjunction of features. Repetition does not change the priorities of targets relative to distractors: Display size affects search equally whether the target is repeated or changed. More than one mechanism may underlie the repetition effect in different experiments, but assuming that there is a unitary mechanism, a short-term episodic memory mechanism is proposed.

Visual search for a target amid multiple nontargets, or distractors, is a complex perceptual task that can be used to study visual attention. Most models of visual search have two stages involved in target selection (Egeth, 1977). The first is a preattentive stage that registers visual features in parallel and segments the visual scene into coarsely represented objects, according to principles such as those outlined by Marr (1982). This first stage also yields a prioritization of display elements that subsequently guides attention, a prioritization that is theorized to be influenced by both bottom-up factors (e.g., the perceptibility, or salience, of the object in the scene) and top-down factors (e.g., the foreknowledge and strategies of the searcher). The preattentive stage is followed by a stage in which attention is focused on elements according to the priorities assigned in the first stage, in order to identify the elements better and choose responses.

Memory is crucial to visual search: A representation of what defines the target must be held in memory in order to know when search has ended. Memory for target definitions is particularly important in the preattentive prioritization stage if the target is well defined, as when a searcher knows exactly what the target will look like. If the target is more conceptually defined (for instance, when the searcher knows only that the target will look different from the distractors), it is less obvious what sort of explicit target template would be used to set priorities. But recent work by Maljkovic and Nakayama (1994) on repetition effects in search for salient targets has demonstrated a role for implicit memory in search that is guided largely by bottom-up factors, rather than by a well-elaborated target definition. The participants responded to the orientation of a target that was defined as a color singleton (either the single red element among green elements or the single green element among red elements) ...
Previous work has shown that abrupt visual onsets capture attention. This occurs even with stimuli that are equiluminant with the background, which suggests that the appearance of a new perceptual object, not merely a change in luminance, captures attention. Three experiments are reported in which this work was extended by investigating the possible role of visual motion in attentional capture. Experiment 1 revealed that motion can efficiently guide attention when it is perfectly informative about the location of a visual search target, but that it does not draw attention when it does not predict the target's position. This result was obtained with several forms of motion, including oscillation, looming, and nearby moving contours. To account for these and other results, we tested a new-object account of attentional capture in Experiment 2 by using a global/local paradigm. When motion segregated a local letter from its perceptual group, the local letter captured attention, as indexed by an effect on latency of response to the task-relevant global configuration. Experiment 3 ruled out the possibility that the motion in Experiment 2 captured attention merely by increasing the salience of the moving object. We argue instead that when motion segregates a perceptual element from a perceptual group, a new perceptual object is created, and this event captures attention. Together, the results suggest that motion as such does not capture attention but that the appearance of a new perceptual object does.

Motion, like many other visual features, is registered effortlessly by the human visual system. For example, discontinuities in motion can be detected in visual search without attentional scrutiny (McLeod, Driver, & Crisp, 1988; Nakayama & Silverman, 1986), just as salient discontinuities in color and orientation can be (see, e.g., Treisman & Gormican, 1988). These findings suggest that visual motion can be detected without attention; motion is therefore often categorized as one of the "building blocks" of early vision. This has contributed to the belief that attention is involuntarily captured by moving objects: A natural way to draw someone's attention is to wave one's arms, and William James listed "moving things" among the sensorial stimuli to which attention is drawn involuntarily (James, 1890/1950). Yet although findings support the claim that motion can be detected effortlessly when it is the target of search, there currently exists no direct evidence about whether motion captures attention.

The argument that motion captures attention runs as follows: (1) Attributes that can be detected without attentional scrutiny (i.e., attributes that "pop out" of a display) ...
The human cortical visual system is organized into major pathways: a dorsal stream projecting to the superior parietal lobe (SPL), considered to be critical for visuospatial perception or on-line control of visually guided movements, and a ventral stream leading to the inferotemporal cortex, mediating object perception. Between these structures lies a large region, consisting of the inferior parietal lobe (IPL) and superior temporal gyrus (STG), the function of which is controversial. Lesions here can lead to spatial neglect, a condition associated with abnormal visuospatial perception as well as impaired visually guided movements, suggesting that the IPL+STG may have largely a "dorsal" role. Here, we use a nonspatial task to examine the deployment of visuotemporal attention in focal lesion patients, with or without spatial neglect. We show that, regardless of the presence of neglect, damage to the IPL+STG leads to a more prolonged deployment of visuotemporal attention compared to lesions of the SPL. Our findings suggest that the human IPL+STG makes an important contribution to nonspatial perception, and this is consistent with a role that is neither strictly "dorsal" nor "ventral". We propose instead that the IPL+STG has a top-down control role, contributing to the functions of both dorsal and ventral visual systems.