Abacus-based mental calculation (AMC) involves the temporary storage and manipulation of an imaginary abacus, a process closely related to the function of visuospatial working memory (VSWM). The present study therefore investigated the effects of AMC training on VSWM and its neural correlates. A total of 144 human subjects (67 boys) were assigned to AMC or control groups at their entry to primary school. The AMC group received 2 h of AMC training per week for 5 school years, whereas the control group spent the time on activities such as conventional calculation and reading. Raven's Intelligence Test was administered both before and after training. Two arithmetic tests and a VSWM task were conducted after training. Among these participants, fMRI data were collected from 64 children during the VSWM task. Behavioral results indicated that the AMC group outperformed controls on both the arithmetic and VSWM tasks, but not on Raven's Intelligence Test. While the two groups activated similar regions during the VSWM task, the AMC group showed greater activation than the controls in frontal, parietal, and occipital areas. Interestingly, activation of the right middle frontal gyrus mediated the relation between arithmetic ability and VSWM performance in the AMC group, suggesting that this frontal region may be the neural substrate underlying the transfer effect from AMC training to VSWM. Although the transfer effects seem quite limited considering the length and intensity of the training, these findings suggest that long-term AMC training not only improves arithmetic ability but also has a potential positive effect on VSWM.
Recognizing speech in noisy environments is a challenging task that involves both auditory and language mechanisms. Previous studies have demonstrated noise-robust neural tracking of the speech envelope, i.e., fluctuations in sound intensity, in human auditory cortex, which provides a plausible neural basis for noise-robust speech recognition. The current study aims at teasing apart auditory and language contributions to noise-robust envelope tracking by comparing 2 groups of listeners, i.e., native listeners of the testing language and foreign listeners who do not understand the testing language. In the experiment, speech is mixed with spectrally matched stationary noise at 4 intensity levels, and the neural responses are recorded using electroencephalography (EEG). When the noise intensity increases, an increase in neural response gain is observed for both groups of listeners, demonstrating auditory gain control mechanisms. Language comprehension creates no overall boost in the response gain or the envelope-tracking precision but instead modulates the spatial and temporal profiles of envelope-tracking activity. Based on the spatiotemporal dynamics of envelope-tracking activity, the 2 groups of listeners and the 4 levels of noise intensity can be jointly decoded by a linear classifier. Altogether, the results show that without feedback from language processing, auditory mechanisms such as gain control can lead to a noise-robust speech representation. High-level language processing, however, further modulates the spatiotemporal profiles of the neural representation of the speech envelope.