We investigated whether lateral masking in the near periphery, which arises from inhibitory lateral interactions at an early level of central visual processing, could be weakened by perceptual learning, and whether learning transferred to an untrained, higher-level form of lateral masking known as crowding. The trained task was contrast detection of a Gabor target presented in the near periphery (4°) in the presence of co-oriented, co-aligned, high-contrast Gabor flankers, with target-to-flanker separations along the vertical axis varying from 2λ to 8λ. We found both suppressive and facilitatory lateral interactions (at target-to-flanker distances of 2λ–4λ and 8λ, respectively) that were larger than those found in the fovea. Training reduced suppression but did not increase facilitation. Most importantly, learning reduced crowding and improved contrast sensitivity but had no effect on visual acuity (VA). These results suggest a pattern of connectivity in the periphery that differs from that in the fovea, as well as a different modulation of this connectivity by perceptual learning, which reduces not only low-level lateral masking but also crowding. These findings have important implications for the rehabilitation of low-vision patients, who must use peripheral vision to perform tasks, such as reading and fine figure-ground segmentation, that normally sighted subjects perform with the fovea.
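To make the stimulus geometry concrete, here is a minimal Python sketch of a Gabor patch and of flanker placement at target-to-flanker separations expressed in carrier wavelengths (λ). All numerical parameters (patch size, wavelength, envelope width, contrasts) are illustrative assumptions, not the authors' actual stimulus code.

```python
import numpy as np

def gabor(size_px, wavelength_px, sigma_px, orientation_rad=0.0, contrast=1.0):
    """Luminance profile of a Gabor patch: a sinusoidal carrier under a Gaussian envelope."""
    half = size_px // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier is modulated along the chosen orientation.
    xr = x * np.cos(orientation_rad) + y * np.sin(orientation_rad)
    carrier = np.cos(2 * np.pi * xr / wavelength_px)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_px**2))
    return contrast * carrier * envelope  # values in [-contrast, contrast]

# Hypothetical display parameters (assumptions, not taken from the paper).
wavelength_px = 20                                             # carrier wavelength λ in pixels
target = gabor(81, wavelength_px, sigma_px=10, contrast=0.05)  # low-contrast target
flanker = gabor(81, wavelength_px, sigma_px=10, contrast=0.90) # high-contrast flanker

for separation_lambda in (2, 3, 4, 8):
    offset_px = separation_lambda * wavelength_px
    print(f"flankers at ±{offset_px} px ({separation_lambda}λ) above/below the target")
```

In a design like the one described, the flanker pair would be drawn at these vertical offsets on each trial while the target contrast is varied to measure a detection threshold per separation.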
Heading estimation is vital to everyday navigation and locomotion. Despite more than two decades of extensive behavioral and physiological research on both visual and vestibular heading estimation, its accuracy has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and a stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. These lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations; because of this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. The lateral biases are, however, inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common heading direction, straight ahead. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead, which could allow more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.
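The population-vector argument can be illustrated with a toy simulation. In the sketch below (a hypothetical model, with all tuning parameters chosen for illustration rather than taken from the recorded populations), preferred directions overrepresent lateral headings (±90°), and population-vector decoding then pulls the estimate of a small forward heading angle away from straight ahead, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Preferred directions: half the population clusters around lateral headings
# (±90°), mimicking the reported overrepresentation; the rest is uniform.
n = 2000
lateral = rng.normal(np.pi / 2, 0.4, n // 2) * rng.choice([-1, 1], n // 2)
uniform = rng.uniform(-np.pi, np.pi, n // 2)
preferred = np.concatenate([lateral, uniform])

def decode(heading_rad, kappa=2.0):
    """Population-vector estimate of heading from von Mises-tuned firing rates."""
    rates = np.exp(kappa * np.cos(heading_rad - preferred))  # cosine-like tuning
    vec = rates @ np.column_stack([np.cos(preferred), np.sin(preferred)])
    return np.arctan2(vec[1], vec[0])

for true_deg in (5, 10, 20, 40):
    est_deg = np.degrees(decode(np.radians(true_deg)))
    print(f"true heading {true_deg:>3}°  decoded {est_deg:6.1f}°")
```

Running this shows decoded headings larger than the true forward angles, i.e., a bias toward lateral directions, qualitatively matching the behavioral pattern reported.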
Self-motion perception involves the integration of visual, vestibular, somatosensory, and motor signals. This article reviews findings from single-unit electrophysiology, functional and structural magnetic resonance imaging, and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one's position and motion in space. The results indicate that a network of regions in the non-human primate and human brain processes self-motion cues from the different sensory modalities.
The last quarter of a century has seen a dramatic rise of interest in the development of technological solutions for visually impaired people. However, despite the availability of many devices, user acceptance is low: not only do visually impaired adults rarely use these devices, but the devices are also too complex for children. The majority of these devices have been developed without considering either the brain mechanisms underlying the deficit or the brain's natural ability to process information, and most of them rely on complex feedback systems that overwhelm sensory, attentional, and memory capacities. Here we review the neuroscientific studies on orientation and mobility in visually impaired adults and children, and we present the technological devices developed so far to improve locomotion skills. We also discuss how we think these solutions could be improved. We hope that this paper will be of interest to neuroscientists and technologists and that it will provide a common background for developing new science-driven technology that is better accepted by visually impaired adults and suitable for children with visual disabilities.
There is strong evidence of shared neurophysiological substrates for visual and vestibular processing that likely support our capacity for estimating our own movement through the environment. We examined behavioral consequences of these shared substrates in the form of crossmodal aftereffects. In particular, we examined whether sustained exposure to a visual self-motion stimulus (i.e., optic flow) induces a subsequent bias in nonvisual (i.e., vestibular) self-motion perception in the opposite direction in darkness. Although several previous studies have investigated self-motion aftereffects, none have demonstrated crossmodal transfer, which provides the strongest evidence that the adapted mechanisms are generalized for self-motion processing. The crossmodal aftereffect was quantified using a motion-nulling procedure in which observers were physically translated on a motion platform to find the movement required to cancel the visually induced aftereffect. Crossmodal transfer was elicited only with the longest-duration visual adaptor (15 s), suggesting that transfer requires sustained vection (i.e., visually induced self-motion perception). Visual-only aftereffects were also measured, but the magnitudes of visual-only and crossmodal aftereffects were not correlated, indicating distinct underlying mechanisms. We propose that crossmodal aftereffects can be understood as an example of contingent or contextual adaptation that arises in response to correlations across signals and functions to reduce these correlations, thereby increasing coding efficiency. According to this view, crossmodal aftereffects in general (e.g., visual-auditory or visual-tactile) can be explained as accidental manifestations of mechanisms that constantly function to calibrate sensory modalities with each other as well as with the environment.
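To illustrate how a motion-nulling procedure of this kind can estimate an aftereffect, here is a minimal sketch using a simple 1-up/1-down staircase. The observer model, aftereffect magnitude, step sizes, and stopping rule are all illustrative assumptions, not the authors' exact protocol.

```python
import random

def observer_reports_leftward(platform_cm, aftereffect_cm=1.5, noise_cm=0.5):
    """Simulated observer: the visual aftereffect adds a constant leftward bias
    (hypothetical magnitude) plus Gaussian noise to the felt displacement."""
    felt_cm = platform_cm - aftereffect_cm + random.gauss(0.0, noise_cm)
    return felt_cm < 0.0

def null_aftereffect(start_cm=0.0, step_cm=1.0, reversals_needed=8):
    """1-up/1-down staircase: adjust the physical translation until it cancels
    the visually induced aftereffect; return the mean of the reversal points."""
    x, last_direction, reversals = start_cm, None, []
    while len(reversals) < reversals_needed:
        direction = +1 if observer_reports_leftward(x) else -1  # step against the report
        if last_direction is not None and direction != last_direction:
            reversals.append(x)
            step_cm = max(step_cm / 2, 0.1)  # shrink the step after each reversal
        x += direction * step_cm
        last_direction = direction
    return sum(reversals) / len(reversals)

random.seed(1)
print(f"nulling displacement ≈ {null_aftereffect():.2f} cm (simulated bias: 1.5 cm)")
```

The staircase converges on the platform translation at which leftward and rightward reports are equally likely, i.e., the physical movement that cancels the aftereffect, which is the quantity of interest in the study.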