Summary: Practice improves discrimination of many basic visual features, such as contrast, orientation, and positional offset [1–7]. Perceptual learning of many of these tasks is found to be retinal-location specific, in that learning transfers little to an untrained retinal location [1, 6–8]. In most perceptual learning models, this location specificity is interpreted as pointing to a retinotopic early visual cortical locus of learning [1, 6–11]. Alternatively, an untested hypothesis is that learning could occur at a central site but consist of two separate aspects: learning to discriminate a specific stimulus feature ("feature learning"), and learning to deal with stimulus-nonspecific factors, such as local noise, at the stimulus location ("location learning") [12]. On this account, learning would not transfer to a new location that has never been location-trained. To test this hypothesis, we developed a novel double-training paradigm that combined conventional feature training (e.g., contrast) at one location with additional training on an irrelevant feature/task (e.g., orientation) at a second location, either simultaneously or at a different time. Our results showed that this additional location training enabled complete transfer of feature learning (e.g., contrast) to the second location. This finding challenges location specificity and its inferred cortical retinotopy as concepts central to many perceptual learning models, and suggests that perceptual learning involves higher, non-retinotopic brain areas that enable location transfer.
Visual perceptual learning models, as constrained by orientation and location specificities, propose that learning reflects either changes in V1 neuronal tuning or the reweighting of specific V1 inputs, whether in the visual cortex or in higher areas. Here we demonstrate that, with a training-plus-exposure procedure, in which observers are trained at one orientation and either simultaneously or subsequently passively exposed to a second transfer orientation, perceptual learning can transfer completely to the second orientation in tasks known to be orientation-specific. However, transfer fails if exposure precedes training. These results challenge the existing specific perceptual learning models by suggesting a more general perceptual learning process. We propose a rule-based learning model to explain perceptual learning and its specificity and transfer. In this model, a decision unit in high-level brain areas learns, through training, the rules for reweighting the V1 inputs. However, these rules cannot be applied to a new orientation/location because the decision unit cannot functionally connect to the new V1 inputs, which are unattended or even suppressed after training at a different orientation/location; this leads to specificity. Repeated orientation exposure or location training reactivates these inputs to establish the functional connections and enable the transfer of learning.
Perceptual learning of orientation discrimination is reported to be precisely specific to the trained retinal location. This specificity is often taken as evidence for localizing the site of orientation learning to the retinotopic cortical areas V1/V2. However, the extant physiological evidence for training-induced improvement of orientation tuning in V1/V2 neurons is controversial and weak. Here we demonstrate substantial transfer of orientation learning across retinal locations, either from the fovea to the periphery or among peripheral locations. Most importantly, we found that a brief pretest at a peripheral location before foveal training enabled complete transfer of learning, so that additional practice at that peripheral location produced no further improvement. These results indicate that location specificity in orientation learning depends on the particular training procedures and is not necessarily a genuine property of orientation learning. We suggest that non-retinotopic high-level brain areas may be responsible for orientation learning, consistent with the extant neurophysiological data.
Perceptual learning of visual features occurs when multiple stimuli are presented in a fixed sequence (temporal patterning), but not when they are presented in random order (roving). This points to the need for proper stimulus coding in order for learning of multiple stimuli to occur. We examined the stimulus-coding rules for learning with multiple stimuli. Our results demonstrate that: (1) stimulus rhythm is necessary for temporal patterning to take effect during practice; (2) learning consolidation is subject to disruption by roving for up to 4 h after each practice session; (3) importantly, after completion of temporal-patterned learning, performance is undisrupted by extended roving training; (4) roving is ineffective if each stimulus is presented for five or more consecutive trials; and (5) roving is also ineffective if each stimulus has a distinct identity. We propose that for multi-stimulus learning to occur, the brain needs to conceptually "tag" each stimulus in order to switch attention to the appropriate perceptual template. Stimulus temporal patterning assists in tagging stimuli and switching attention through its rhythmic stimulus sequence.