Sleep is beneficial for various types of learning and memory, including a finger-tapping motor-sequence task. However, methodological issues have hindered identification of the cortical regions crucial for sleep-dependent consolidation in motor-sequence learning. Here, to investigate the core cortical region for sleep-dependent consolidation of finger-tapping motor-sequence learning, we measured spontaneous cortical oscillations by magnetoencephalography, together with polysomnography, while human subjects were asleep, and source-localized the origins of the oscillations using individual anatomical brain information from MRI. First, we confirmed that performance of the task at a retest session after sleep significantly increased compared with performance at the training session before sleep. Second, spontaneous δ and fast oscillations significantly increased in the supplementary motor area (SMA) during post-training sleep compared with pre-training sleep, showing a significant and high correlation with the performance increase. Third, the increased spontaneous oscillations in the SMA that correlated with performance improvement were specific to slow-wave sleep. We also found that correlations of δ oscillations between the SMA and prefrontal regions, and between the SMA and parietal regions, tended to decrease after training. These results suggest that a core brain region for sleep-dependent consolidation of finger-tapping motor-sequence learning resides in the SMA contralateral to the trained hand and is mediated by spontaneous δ and fast oscillations, especially during slow-wave sleep. The consolidation may arise along with possible reorganization of a larger-scale cortical network that involves the SMA and cortical regions outside the motor areas, including prefrontal and parietal regions.
How do humans rapidly recognize a scene? How can neural models capture this biological competence to achieve state-of-the-art scene classification? The ARTSCENE neural system classifies natural scene photographs by using multiple spatial scales to efficiently accumulate evidence for gist and texture. ARTSCENE embodies a coarse-to-fine Texture Size Ranking Principle whereby spatial attention processes multiple scales of scenic information, from global gist to local textures, to learn and recognize scenic properties. The model can incrementally learn and rapidly predict scene identity by gist information alone, and then accumulate learned evidence from scenic textures to refine this hypothesis. The model shows how texture-fitting allocations of spatial attention, called attentional shrouds, can facilitate scene recognition, particularly when they include a border of adjacent textures. Using grid gist plus three shroud textures on a benchmark photograph dataset, ARTSCENE discriminates four landscape scene categories (coast, forest, mountain, and countryside) with up to 91.85% correct on a test set, outperforms alternative models in the literature that use biologically implausible computations, and outperforms component systems that use either gist or texture information alone.
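The coarse-to-fine strategy described above can be illustrated with a minimal sketch, not the published ARTSCENE code: a global gist score seeds an initial hypothesis per scene category, and successive texture-scale scores then multiplicatively refine that hypothesis. The function names, category labels, and numeric scores here are illustrative placeholders.

```python
def normalize(evidence):
    """Rescale category evidence so it sums to 1."""
    total = sum(evidence.values())
    return {category: value / total for category, value in evidence.items()}

def accumulate_evidence(gist_scores, texture_score_stages):
    """Combine global-gist evidence with successive texture evidence.

    gist_scores: dict mapping category -> initial evidence from global gist.
    texture_score_stages: list of dicts, each mapping category -> evidence
        from one progressively finer texture scale.
    Returns the normalized running hypothesis after each stage.
    """
    evidence = dict(gist_scores)
    history = [normalize(evidence)]
    for stage in texture_score_stages:
        for category, score in stage.items():
            # Multiplicative update: texture evidence refines the gist prior.
            evidence[category] = evidence.get(category, 0.0) * score
        history.append(normalize(evidence))
    return history
```

As a usage example, gist alone might weakly favor "coast", while a later texture stage with strong "forest" evidence overturns that initial hypothesis, mirroring the refine-over-time behavior the abstract describes.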
How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as a scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams starting from early visual areas through medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time to drive efficient visual search of familiar scenes.
Previous infant studies on the other-race effect have favored the perceptual narrowing view, i.e., declining sensitivity to rarely exposed other-race faces. Here we wish to provide an alternative possibility, perceptual learning, manifested by improved sensitivity to frequently exposed own-race faces in the first year of life. Using the familiarization/visual-paired comparison paradigm, we presented 4-, 6-, and 9-month-old Taiwanese infants with oval-cropped Taiwanese, Caucasian, and Filipino faces, each with three different manipulations of increasing task difficulty (i.e., change identity, change eyes, and widen eye spacing). An adult experiment was first conducted to verify the task difficulty. Our results showed that, with oval-cropped faces, the 4-month-old infants could discriminate only the Taiwanese "change identity" condition and no others, suggesting an early own-race advantage at 4 months. The 6-month-old infants demonstrated novelty preferences in both the Taiwanese and Caucasian "change identity" conditions, and proceeded to the Taiwanese "change eyes" condition. The 9-month-old infants demonstrated novelty preferences in the "change identity" condition for all three ethnic face types. They also passed the Taiwanese "change eyes" condition but could not extend this refined ability to detect a change in the eyes to the Caucasian or Filipino faces. Taken together, we interpret the pattern of results as evidence supporting perceptual learning during the first year: the ability to discriminate own-race faces emerges at 4 months and continues to refine, while the ability to discriminate other-race faces emerges between 6 and 9 months and is retained at 9 months. Additionally, discrepancies in the face stimuli and methods between studies advocating the narrowing view and those supporting the learning view are discussed.
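The familiarization/visual-paired comparison paradigm mentioned above rests on a standard measure: the proportion of total looking time directed at the novel stimulus, with scores reliably above 0.5 (chance) taken as evidence of discrimination. The following is an illustrative computation of that score, not the paper's own analysis code; the function name and the 0.5 chance criterion are the only assumptions, and both reflect standard usage of the paradigm.

```python
def novelty_preference(novel_looking_s, familiar_looking_s):
    """Return the novelty-preference score: the fraction of total looking
    time (in seconds) spent on the novel stimulus, in the range 0..1.

    Scores reliably above 0.5 across infants indicate that the novel and
    familiarized stimuli were discriminated.
    """
    total = novel_looking_s + familiar_looking_s
    if total <= 0:
        raise ValueError("no looking time recorded")
    return novel_looking_s / total
```

For example, an infant who looks 6 s at the novel face and 4 s at the familiarized face yields a score of 0.6, a novelty preference; 5 s each yields 0.5, i.e., no evidence of discrimination.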