Perceptual skills can be improved through practice on a perceptual task, even in adulthood. Visual perceptual learning is known to be largely specific to the trained retinal location, which is considered evidence of neural plasticity in retinotopic early visual cortex. Recent findings demonstrate that learning can transfer to untrained locations under specific training procedures. Here, we evaluated whether exogenous attention facilitates transfer of perceptual learning to untrained locations, both adjacent to the trained locations (Experiment 1) and distant from them (Experiment 2). The results reveal that attention facilitates transfer of perceptual learning to untrained locations in both experiments, and that this transfer occurs both within and across visual hemifields. These findings show that training with exogenous attention is a powerful regime that can overcome the major limitation of location specificity.
Practice can improve visual perception, and these improvements are considered a form of brain plasticity. Training-induced learning is time-consuming, requiring hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which task-irrelevant cues manipulated attention between groups during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, indicating that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy in enabling learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding.
Perceptual learning is a sustainable improvement in performance on a perceptual task following training. A hallmark of perceptual learning is task specificity: after participants have trained on and learned a particular task, learning rarely transfers to another task, even with identical stimuli. Accordingly, it is assumed that performing a task throughout training is a requirement for learning to occur on that specific task. Thus, interleaving training trials of a target task with those of another task should not improve performance on the target task. However, recent findings in audition show that interleaving two tasks during training can facilitate perceptual learning, even when training on neither task yields learning on its own. Here we examined the role of cross-task training in the visual domain by training four groups of human observers for three consecutive days on an orientation comparison task (target task) and/or a spatial-frequency comparison task (interleaving task). Interleaving small amounts of training on each task, which were ineffective alone, not only enabled learning on the target orientation task, as in audition, but also surpassed the learning attained by training on that task alone for the same total number of trials. This study illustrates that cross-task training in visual perceptual learning can be more effective than single-task training. The results reveal a comparable learning principle across modalities and demonstrate how to optimize training regimens to maximize perceptual learning.
Recent advances in head-mounted displays (HMDs) present an opportunity to design vision enhancement systems for people with low vision, whose vision cannot be corrected with glasses or contact lenses. We aim to understand whether and how HMDs can aid low vision people in their daily lives. We designed ForeSee, an HMD prototype that enhances people's view of the world with image processing techniques such as magnification and edge enhancement. We evaluated these vision enhancements with 20 low vision participants who performed four viewing tasks: image recognition and reading, each at near and far distances. We found that participants needed to combine and adjust the enhancements to comfortably complete the viewing tasks. We then designed two input modes to enable fast and easy customization: speech commands and smartwatch-based gestures. While speech commands are commonly used for eyes-free input, our novel set of onscreen gestures on a smartwatch can be used in scenarios where speech is not appropriate or desired. We evaluated both input modes with 11 low vision participants and found that both modes effectively enabled low vision users to customize their visual experience on the HMD. We distill design insights for HMD applications for low vision and outline new research directions.
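The magnification and edge enhancement described in this abstract are standard image processing operations. As a rough, hypothetical illustration of how such a camera-to-display pipeline could be composed, the Python/OpenCV sketch below combines a center zoom with unsharp-mask sharpening; the function name and parameter values are assumptions for illustration only, not ForeSee's actual implementation.

```python
# Hypothetical sketch only; ForeSee's real code and parameters are not published here.
import cv2

def enhance_frame(frame, zoom=2.0, sharpen_amount=0.5):
    """Magnify a camera frame and apply simple unsharp-mask edge enhancement."""
    h, w = frame.shape[:2]

    # Magnification: upscale the frame, then crop back to the original size
    # so the displayed field of view stays centered.
    magnified = cv2.resize(frame, None, fx=zoom, fy=zoom,
                           interpolation=cv2.INTER_LINEAR)
    y0 = (magnified.shape[0] - h) // 2
    x0 = (magnified.shape[1] - w) // 2
    magnified = magnified[y0:y0 + h, x0:x0 + w]

    # Edge enhancement via unsharp masking: subtract a blurred copy to boost edges.
    blurred = cv2.GaussianBlur(magnified, (0, 0), sigmaX=3)
    enhanced = cv2.addWeighted(magnified, 1.0 + sharpen_amount,
                               blurred, -sharpen_amount, 0)
    return enhanced
```

In a system like the one described, the `zoom` and `sharpen_amount` parameters would be the kind of settings users adjust on the fly via speech commands or smartwatch gestures.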
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response.