From moment to moment, we perceive objects in the world as continuous despite fluctuations in their image properties due to factors like occlusion, visual noise, and eye movements. The mechanism by which the visual system accomplishes this object continuity remains elusive. Recent results have demonstrated that the perception of low-level stimulus features such as orientation and numerosity is systematically biased (i.e., pulled) toward visual input from the recent past [1, 2]. The spatial region over which current orientations are pulled by previous orientations is known as the continuity field, which is temporally tuned, extending over the past 10–15 s [1]. This perceptual pull could contribute to the visual stability of low-level features over short time periods, but it does not address how visual stability occurs at the level of object identity. Here, we tested whether the visual system facilitates stable perception by biasing current perception of a face, a complex and behaviorally relevant object, toward recently seen faces. We found that perception of face identity is systematically biased toward identities seen up to several seconds prior, even across changes in viewpoint. This effect did not depend on subjects’ prior responses or on the method used to measure identity perception. Although this bias in perceived identity manifests as a misperception, it is adaptive: visual processing echoes the stability of objects in the world to create perceptual continuity. The serial dependence of identity perception promotes object identity invariance over time and provides the clearest evidence for the existence of an object-selective perceptual continuity field.
It is unknown whether the white matter properties associated with specific visual networks selectively affect category-specific processing. In a novel protocol, we combined measurements of white matter structure, functional selectivity, and behavior in the same subjects. We find two parallel white matter pathways along the ventral temporal lobe connecting to either face-selective or place-selective regions. Diffusion properties of portions of these tracts adjacent to face- and place-selective regions of ventral temporal cortex correlate with behavioral performance for face or place processing, respectively. Strikingly, adults with developmental prosopagnosia (face blindness) express an atypical structure-behavior relationship near face-selective cortex, suggesting that white matter atypicalities in this region may have behavioral consequences. These data suggest that examining the interplay between cortical function, anatomical connectivity, and visual behavior is integral to understanding functional networks and their role in producing visual abilities and deficits.
Observers perceive objects in the world as stable over space and time, even though the visual experience of those objects is often discontinuous and distorted due to masking, occlusion, camouflage, or noise. How are we able to easily and quickly achieve stable perception in spite of this constantly changing visual input? It was previously shown that observers experience serial dependence in the perception of features and objects, an effect that extends up to 15 seconds back in time. Here, we asked whether the visual system utilizes an object's prior physical location to inform future position assignments in order to maximize the location stability of an object over time. To test this, we presented subjects with small targets at random angular locations relative to central fixation in the peripheral visual field. Subjects reported the perceived location of the target on each trial by adjusting a cursor's position to match its location. Subjects made consistent errors when reporting the perceived position of the target on the current trial, mislocalizing it toward the position of the target in the preceding two trials (Experiment 1). This pull in position perception occurred even when a response was not required on the previous trial (Experiment 2). In addition, we show that serial dependence in perceived position occurs immediately after stimulus presentation, acting as a fast stabilization mechanism that does not require a delay (Experiment 3). This indicates that serial dependence occurs for position representations and facilitates the stable perception of objects in space. Taken together with previous work, our results show that serial dependence occurs at many stages of visual processing, from initial position assignment to object categorization.
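The adjustment task described above lends itself to a standard serial-dependence analysis: plot each trial's report error against the difference between the previous and current target positions and fit a derivative-of-Gaussian (DoG) curve, whose amplitude summarizes the pull toward the previous trial. The sketch below is illustrative only and not taken from the paper; it uses simulated data, and the function names (`wrap`, `dog`) and all parameter values are assumptions.

```python
# Minimal sketch (assumed analysis, simulated data): quantify serial dependence
# in reported position by fitting a derivative-of-Gaussian (DoG) to adjustment
# errors as a function of the previous-minus-current target angle.
import numpy as np
from scipy.optimize import curve_fit

def wrap(deg):
    """Wrap angular differences into the range [-180, 180) degrees."""
    return (deg + 180) % 360 - 180

def dog(x, amplitude, width):
    """Derivative-of-Gaussian curve whose peak height equals `amplitude`."""
    c = np.sqrt(2) / np.exp(-0.5)  # normalization so the peak equals `amplitude`
    return amplitude * c * width * x * np.exp(-(width * x) ** 2)

# Simulated example: reports are pulled toward the previous trial's target
# (a 3-degree peak pull is assumed purely for illustration).
rng = np.random.default_rng(0)
n_trials = 500
targets = rng.uniform(0, 360, n_trials)                # target angle on each trial
prev_minus_curr = wrap(np.roll(targets, 1) - targets)  # previous minus current target
reports = targets + dog(prev_minus_curr, 3.0, 0.02) + rng.normal(0, 5, n_trials)

errors = wrap(reports - targets)         # adjustment error on each trial
x, y = prev_minus_curr[1:], errors[1:]   # drop trial 0, which has no previous trial

params, _ = curve_fit(dog, x, y, p0=[1.0, 0.02])
print(f"fitted serial-dependence amplitude: {params[0]:.2f} deg")
```

A positive fitted amplitude would indicate a pull toward the previous target, consistent with the mislocalization pattern reported in Experiment 1; the same fit restricted to trials following no-response trials would address Experiment 2.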
We are continuously surrounded by a noisy and ever-changing environment. Instead of analyzing every element in a scene, our visual system can compress an enormous amount of visual information into ensemble representations, such as perceiving a forest instead of every single tree. Still, it is unclear why such complex scenes appear to be the same from moment to moment despite fluctuations, noise, and discontinuities in retinal images. Change blindness is usually thought to underlie this apparent stability, leaving us unaware of minor inconsistencies between successive scenes. Here, we propose an alternative: stable scene perception is actively achieved by the visual system through global serial dependencies, in which the appearance of scene gist is sequentially dependent on the gist perceived in previous moments. To test this hypothesis, we used summary statistical information as a proxy for “gist”-level, global information in a scene. We found evidence for serial dependence in summary statistical representations. Furthermore, we show that this kind of serial dependence occurs at the ensemble level, where local elements are already merged into global representations. Taken together, our results provide a mechanism through which serial dependence can promote the apparent consistency of scenes over time.
Individuals can quickly and effortlessly recognize facial expressions, which is critical for social perception and emotion regulation. This sensitivity to even slight facial changes could result in unstable percepts of an individual's expression over time. The visual system must therefore balance accuracy with maintaining perceptual stability. However, previous research has focused on our sensitivity to changing expressions, and the mechanism behind expression stability remains an open question. Recent results demonstrate that perception of facial identity is systematically biased toward recently seen visual input. This positive perceptual pull, or serial dependence, may help stabilize perceived expression. To test this, observers judged random facial expression morphs ranging from happy to sad to angry. We found a pull in perceived expression toward previously seen expressions, but only when the 1-back and current face had similar identities. Our results are consistent with the existence of a continuity field for expression, a specialized mechanism that promotes the stability of emotion perception, which could help facilitate social interactions and emotion regulation.