Purpose: We examined whether school-age children with a history of SLI (H-SLI), their typically developing (TD) peers, and adults differ in sensitivity to audiovisual temporal asynchrony, and whether any such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15 TD children, and 15 adults judged whether a flashed explosion-shaped figure and a 2 kHz pure tone occurred simultaneously. The stimuli were presented at temporal offsets of 0, 100, 200, 300, 400, and 500 ms, and the task was combined with EEG recordings. Results: H-SLI children were profoundly less sensitive to temporal separations between the auditory and visual modalities than their TD peers. H-SLI children who performed better at simultaneity judgment also had higher language aptitude. TD children were less accurate than adults, revealing a remarkably prolonged developmental course for audiovisual temporal discrimination. Analysis of early ERP components suggested that poor sensory encoding was not a key factor in H-SLI children's reduced sensitivity to audiovisual asynchrony. Conclusions: Audiovisual temporal discrimination is impaired in H-SLI children and is still immature during mid-childhood in TD children. These findings highlight the need for further evaluation of the role of atypical audiovisual processing in the development of SLI.
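To make the paradigm concrete, below is a minimal Python sketch of a trial list for a simultaneity-judgment task using the six temporal offsets reported in the abstract. The repetition count, the counterbalancing of which modality leads, and all names are illustrative assumptions rather than details taken from the study.

```python
import random

# Illustrative trial-list generator for a simultaneity-judgment task.
# The six SOAs come from the abstract; TRIALS_PER_SOA and the
# counterbalancing of modality order are assumptions.

SOAS_MS = [0, 100, 200, 300, 400, 500]
TRIALS_PER_SOA = 20  # assumed; the study's trial counts are not given here

def build_trials():
    trials = []
    for soa in SOAS_MS:
        for _ in range(TRIALS_PER_SOA):
            if soa == 0:
                order = "simultaneous"
            else:
                order = random.choice(["auditory_first", "visual_first"])
            trials.append({"soa_ms": soa, "order": order})
    random.shuffle(trials)
    return trials

print(build_trials()[0])  # e.g. {'soa_ms': 300, 'order': 'visual_first'}
```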
Using electrophysiology, we examined two questions about musical training: whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task in which they identified each sound as either short (350 ms) or long (550 ms) while ignoring a change in the sounds' timbre. Sounds consisted of a male and a female voice producing the neutral vowel [a] and of a cello and a French horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials and voice sounds on 20% of trials; in other blocks, the reverse was true. Participants heard naturally recorded sounds in half of the experimental blocks and their spectrally rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 ERP component not only to vocal sounds but also to their never-before-heard spectrally rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in the sounds' timbre, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger reorienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension.
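The block structure described above follows a standard 80/20 distraction design. Here is a minimal sketch of how one such block might be generated; the trial count, the exact stimulus labels, and the use of probabilistic rather than exact deviant proportions are assumptions for illustration only.

```python
import random

# Illustrative generator for one block of the 80/20 distraction paradigm:
# duration (350 vs 550 ms) is task-relevant; timbre changes on ~20% of
# trials and is to be ignored. Trial count and exact proportions per
# block are assumptions.

def build_block(standard_timbre, deviant_timbre, n_trials=100, p_deviant=0.2):
    trials = []
    for _ in range(n_trials):
        timbre = deviant_timbre if random.random() < p_deviant else standard_timbre
        duration_ms = random.choice([350, 550])  # the dimension participants judge
        trials.append({"timbre": timbre, "duration_ms": duration_ms})
    return trials

# Musical standards with vocal deviants, and the reverse assignment.
block_music_standard = build_block("cello", "voice")
block_voice_standard = build_block("voice", "french_horn")
```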
Purpose: Recent behavioral studies have demonstrated the effectiveness of incorporating retrieval practice into learning tasks for children. Such work has shown that repeated spaced retrieval (RSR) is particularly effective in promoting children's learning of word form and meaning. This study further examines how retrieval practice enhances learning of word meaning at the behavioral and neural levels. Method: Twenty typically developing preschool children were taught novel words using an RSR learning schedule for some words and an immediate retrieval (IR) learning schedule for others. In addition to the label, children were taught two arbitrary semantic features for each item. Following the teaching phase, children's learning was tested with recall tests. During the 1-week follow-up, children were also presented with pictures and an auditory sentence that correctly labeled the item but stated either correct or incorrect semantic information. Event-related brain potentials (ERPs) were time-locked to the onset of the words conveying the semantic feature, and children verbally judged whether the semantic feature was correctly paired with the item. Results: Children recalled more labels and semantic features for items taught in the RSR learning schedule than for items taught in the IR learning schedule. ERPs also differentiated the learning schedules: mismatching label–meaning pairings elicited an N400 and a late positive component (LPC) in both learning conditions, but mismatching RSR pairs elicited an N400 with an earlier onset and an LPC with a longer duration relative to IR mismatching pairings. These timing differences indicate that children processed words taught in the RSR schedule more efficiently than words taught in the IR schedule. Conclusions: Spaced retrieval practice promotes learning of both word form and meaning. The findings lay the groundwork for a better understanding of how preschool children process newly learned semantic information. Supplemental Material: https://doi.org/10.23641/asha.15063060
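The contrast between the two schedules can be sketched schematically. In an immediate-retrieval schedule, a retrieval attempt directly follows each exposure; in a repeated spaced-retrieval schedule, retrieval attempts for a word are separated by other items. The word forms, cycle count, and exact ordering below are illustrative assumptions, not the study's protocol.

```python
# Schematic contrast between the two teaching schedules named above.
# IR: retrieve each word immediately after its exposure.
# RSR: space retrieval attempts by interleaving them across items.

def ir_schedule(words):
    events = []
    for w in words:
        events += [("expose", w), ("retrieve", w)]
    return events

def rsr_schedule(words, n_cycles=3):
    events = [("expose", w) for w in words]
    for _ in range(n_cycles):
        events += [("retrieve", w) for w in words]  # spacing via interleaving
    return events

print(ir_schedule(["nef", "pabe"]))
print(rsr_schedule(["nef", "pabe"]))
```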
Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding, but the degree of this improvement varies greatly among individuals. We examined the relationship between two distinct stages of visual articulatory processing and SIN accuracy by combining a cross-modal repetition priming task with event-related potential (ERP) recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether a subsequently presented silent visual articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of mismatch detection at the pre-lexical level, whereas congruent articulations elicited a significantly larger late positive component (LPC), indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials was significantly correlated with the improvement in individuals' SIN accuracy in the presence of the talker's face.
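The brain-behavior analysis described above amounts to correlating a per-participant ERP difference with a behavioral gain. Below is a minimal sketch of that computation; the array shapes, the 300-500 ms N400 window, and the random placeholder data are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch: per-participant N400 (incongruent minus congruent) effect
# correlated with the audiovisual speech-in-noise (SIN) gain.

rng = np.random.default_rng(0)
n_subj, n_times = 20, 500
times = np.linspace(-0.1, 0.9, n_times)             # epoch time axis (s)
congruent = rng.normal(size=(n_subj, n_times))      # placeholder ERPs (uV)
incongruent = rng.normal(size=(n_subj, n_times))
sin_gain = rng.normal(size=n_subj)                  # AV-minus-A accuracy gain

window = (times >= 0.3) & (times <= 0.5)            # assumed N400 window
n400_effect = (incongruent - congruent)[:, window].mean(axis=1)

r, p = pearsonr(n400_effect, sin_gain)
print(f"r = {r:.2f}, p = {p:.3f}")
```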
Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7- to 8-year-olds and 10- to 11-year-olds) and in adults. Audiovisual speech led to attenuation of the N1 and P2 components in all groups of participants, suggesting that the underlying neural mechanisms are functional by the early school years. Additionally, while the N1 reduction was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. This difference in hemispheric distribution supports the idea that the two components index at least partially distinct neural processes within audiovisual speech perception.
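One common way to quantify the audiovisual attenuation described above is to compare peak amplitudes of auditory-only (A) and audiovisual (AV) ERPs within component-typical windows. The sketch below illustrates that computation; the window boundaries and the placeholder waveforms are assumptions, not values from the study.

```python
import numpy as np

# Sketch: attenuation as the reduction in absolute peak amplitude
# from the auditory-only (A) to the audiovisual (AV) condition.

rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.5, 600)            # epoch time axis (s)
erp_a = rng.normal(size=times.size)            # placeholder grand averages (uV)
erp_av = rng.normal(size=times.size)

def peak(erp, tmin, tmax, polarity):
    seg = erp[(times >= tmin) & (times <= tmax)]
    return seg.min() if polarity == "neg" else seg.max()

n1_att = abs(peak(erp_a, 0.08, 0.16, "neg")) - abs(peak(erp_av, 0.08, 0.16, "neg"))
p2_att = abs(peak(erp_a, 0.15, 0.25, "pos")) - abs(peak(erp_av, 0.15, 0.25, "pos"))
print(f"N1 attenuation: {n1_att:.2f} uV, P2 attenuation: {p2_att:.2f} uV")
```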