During natural reading, parafoveal information is processed to some degree. Although isolated words can be fully processed in the parafovea, not all sentence-reading experiments have found evidence of parafoveal semantic processing. We suggest a possible reconciliation of these mixed results via two ERP studies in which volunteers read sentences presented word by word at fixation, with each fixated word flanked bilaterally by the upcoming word on its right and the preceding word on its left. Half of the words appearing in the right parafovea of critical triads, and then in the fovea of the subsequent triad, were semantically incongruent. The conditions under which parafoveal words elicited canonical visual N400 congruity effects suggest that they are processed in parallel with foveal words, but that parafoveal extraction of semantic information depends on contextual constraint and presentation rate, being most likely under high contextual constraint and at slower presentation rates.
This study analyzed the electrophysiological correlates of language switching in second language learners. Participants were native Spanish speakers classified into two groups according to English proficiency (high and low). Event-related potentials (ERPs) were recorded while they read English sentences, half of which contained a Spanish adjective in the middle of the sentence. The ERP results reveal the time course of language-switch processing in both groups: an initial detection of the switch driven by language-specific orthography (left-occipital N250), followed by costs at the level of the lexico-semantic system (N400), and finally a late updating or reanalysis process (LPC). In the high-proficiency group, effects in the N400 time window extended to left anterior electrodes and were followed by larger LPC amplitudes at posterior sites. These differences suggest that proficiency modulates the different processes triggered by language switches.
Producing a word is often complicated by the fact that other words share meaning with the intended word. The competition between words that arises in such a situation is a well-known phenomenon in the word production literature. An ongoing debate across several research domains concerns how this competition is resolved. Here, we contributed to the debate by presenting evidence indicating that resolving competition during word production involves a post-retrieval mechanism of conflict resolution. Specifically, we tracked the time course of competition during word production using electroencephalography. In the experiment, participants named pictures in contexts that varied in the strength of competition. The electrophysiological data show that competition is associated with a late, frontally distributed component arising between 500 and 750 ms after picture presentation. These data are interpreted in terms of a model of word production that relies on a mechanism of cognitive control.
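As a rough illustration of how such a late frontal effect can be quantified from EEG epochs, the sketch below averages the signal over frontal channels in the 500-750 ms window and compares conditions. This is a hedged sketch rather than the authors' pipeline; the file name, condition labels and electrode selection are assumptions.

    import mne
    from scipy import stats

    epochs = mne.read_epochs("picture_naming-epo.fif")   # hypothetical epochs file
    frontal = ["F3", "Fz", "F4", "FC1", "FC2"]           # assumed frontal ROI

    def window_mean(condition):
        # Mean amplitude over the frontal ROI in 0.5-0.75 s, one value per trial.
        data = (epochs[condition].copy()
                .pick(frontal)
                .crop(tmin=0.5, tmax=0.75)
                .get_data())                              # trials x channels x times
        return data.mean(axis=(1, 2))

    high = window_mean("high_competition")               # assumed event labels
    low = window_mean("low_competition")
    t, p = stats.ttest_ind(high, low)
    print(f"late frontal competition effect: t={t:.2f}, p={p:.3f}")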
Speech production is a complex skill whose neural implementation relies on a large number of different brain regions. How neural activity in these regions varies over time during the production of speech remains poorly understood. Previous MEG studies on this topic have concluded that activity proceeds sequentially from posterior to anterior regions of the brain. Here we tested this claim using EEG. Specifically, participants performed a picture naming task while their naming latencies and scalp potentials were recorded. We performed group temporal independent component analysis (group tICA) to obtain temporally independent component time courses and their corresponding topographic maps. We identified fifteen components whose estimated neural sources were located in various areas of the brain. The trial-by-trial component time courses were predictive of naming latency, implying their involvement in the task. Crucially, we computed the degree of concurrent activity of each component time course to test whether activity was sequential or parallel. Our results revealed that these fifteen distinct neural sources exhibit largely concurrent activity during speech production. These results suggest that speech production relies on neural activity that takes place in parallel networks of distributed neural sources.

It is now well understood that the production of speech relies on neural activity in a wide range of brain areas (e.g., [1-7]). How this activity is coordinated over time such that it results in fast and fluent speech remains largely unknown. Designing therapies for pathologies such as aphasia, dysarthria and stuttering first requires a good understanding of how speech production works under non-pathological circumstances. Here we examined the activation dynamics of different brain areas while speech was produced. Specifically, speech production was elicited using a picture naming task in which participants overtly produced single words in response to visually presented objects. Previous Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI) studies have repeatedly shown that picture naming yields activation in occipital, temporal, frontal and parietal areas of the cortex, as well as in the striatum, thalamus and brain stem of the subcortex (e.g., [8-12]). Although these studies are informative about the location of brain activity underlying picture naming, other techniques such as Magnetoencephalography (MEG) and Electroencephalography (EEG) are needed to examine the precise temporal dynamics of neural activity underlying the task. With respect to this issue, previous MEG studies have concluded that neural activation underlying picture naming proceeds sequentially from posterior to anterior areas of the brain (e.g., [13-18]). Here we attempted to validate this conclusion from MEG studies by using a different analysis approach that relied on EEG data...
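For readers unfamiliar with the approach, the following is a minimal sketch of a group temporal ICA and a simple concurrency measure of the kind described above. It is not the authors' exact pipeline: the data are simulated, and the number of components, the concatenation scheme and the z-score threshold are illustrative assumptions.

    import numpy as np
    from scipy.stats import zscore
    from sklearn.decomposition import FastICA

    # One array per subject, shape (n_channels, n_times); simulated stand-in data.
    rng = np.random.default_rng(0)
    eeg_list = [rng.standard_normal((64, 400)) for _ in range(20)]

    # Group temporal ICA: concatenate subjects along time and unmix, so the
    # recovered component time courses are maximally independent over time.
    X = np.concatenate(eeg_list, axis=1)               # channels x (subjects * times)
    ica = FastICA(n_components=15, random_state=0)
    sources = ica.fit_transform(X.T).T                 # 15 component time courses
    topographies = ica.mixing_                         # channels x components (scalp maps)

    # Concurrency: fraction of time points at which two components are both
    # "active", i.e. their z-scored time courses exceed a threshold.
    active = np.abs(zscore(sources, axis=1)) > 1.5
    concurrency = (active[:, None, :] & active[None, :, :]).mean(axis=2)
    print(concurrency.shape)                           # (15, 15) pairwise concurrency matrix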