With each eye movement, the image of the world received by the visual system changes dramatically. To maintain stable spatiotopic (world-centered) visual representations, the retinotopic (eye-centered) coordinates of visual stimuli are continually remapped, even before the eye movement is completed. Recent psychophysical work has suggested that updating of attended locations occurs as well, although on a slower timescale, such that sustained attention lingers in retinotopic coordinates for several hundred milliseconds after each saccade. To explore where and when this "retinotopic attentional trace" resides in the cortical visual processing hierarchy, we conducted complementary functional magnetic resonance imaging and event-related potential (ERP) experiments using a novel gaze-contingent task. Human subjects executed visually guided saccades while covertly monitoring a fixed spatiotopic target location. Although subjects responded only to stimuli appearing at the attended spatiotopic location, blood oxygen level-dependent responses to stimuli appearing after the eye movement at the previously, but no longer, attended retinotopic location were enhanced in visual cortical area V4 and throughout visual cortex. This retinotopic attentional trace was also detectable with higher temporal resolution in the anterior N1 component of the ERP data, a well-established signature of attentional modulation. Together, these results demonstrate that, when top-down spatiotopic signals act to redirect visuospatial attention to new retinotopic locations after eye movements, facilitation transiently persists in the cortical regions representing the previously relevant retinotopic location.
Human faces are fundamentally dynamic, but experimental investigations of face perception have traditionally relied on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and the location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event-related potential (ERP), which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that the different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic face stimuli, matched on low-level image characteristics, that are applicable to a range of research questions in social perception.