Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single-item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher-frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition.
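The single-item regression logic described above can be sketched as follows. This is a minimal illustration, not the study's analysis pipeline: the predictors, the simulated amplitudes, and the effect sizes are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-item lexical predictors (z-scored):
# neighborhood size, neighbor frequency, number of lexical
# associates, and associate frequency.
n_items = 200
X = rng.standard_normal((n_items, 4))

# Simulated single-item N400 amplitudes: larger (more negative)
# responses for items with more/higher-frequency neighbors and
# associates, plus trial noise. Effect sizes are illustrative.
true_betas = np.array([-0.8, -0.5, -0.6, -0.4])
n400 = X @ true_betas + rng.standard_normal(n_items) * 0.5

# Ordinary least squares: one amplitude per item, regressed on
# the item-level lexical variables (with an intercept column).
design = np.column_stack([np.ones(n_items), X])
betas, *_ = np.linalg.lstsq(design, n400, rcond=None)
print(betas[1:])  # recovered effects of each lexical variable
```

With per-item (rather than condition-averaged) amplitudes as the dependent measure, continuous lexical variables can enter the model directly instead of being dichotomized into factorial conditions.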
Two related questions critical to understanding the predictive processes that come online during sentence comprehension are 1) what information is included in the representation created through prediction and 2) at what functional stage does top-down, predicted information begin to affect bottom-up word processing? We investigated these questions by recording event-related potentials (ERPs) as participants read sentences that ended with expected words or with unexpected items (words, pseudowords, or illegal strings) that were either orthographically unrelated to the expected word or were one of its orthographic neighbors. The data show that, regardless of lexical status, attempts at semantic access (N400) for orthographic neighbors of expected words are facilitated relative to the processing of orthographically unrelated items. Our findings support a view of sentence processing wherein orthographically organized information is brought online by prediction and interacts with input prior to any filter on lexical status.
Visual word recognition is a process that, both hierarchically and in parallel, draws on different types of information ranging from perceptual to orthographic to semantic. A central question concerns when and how these different types of information come online and interact after a word form is initially perceived. Numerous studies addressing aspects of this question have been conducted with a variety of techniques (e.g., behavior, eye-tracking, ERPs), and divergent theoretical models, suggesting different overall speeds of word processing, have coalesced around clusters of mostly method-specific results. Here, we examine the time course of influence of variables ranging from relatively perceptual (e.g., bigram frequency) to relatively semantic (e.g., number of lexical associates) on ERP responses, analyzed at the single-item level. Our results, in combination with a critical review of the literature, suggest methodological, analytic, and theoretical factors that may have led to inconsistency in the results of past studies; we argue that consideration of these factors may lead to a reconciliation between divergent views of the speed of word recognition.
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition that simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture that are sufficiently domain-general to be relevant to cognitive modeling more broadly.
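The paper's model is not reproduced here, but the general PDP approach to lexical decision, in which lexical status is judged from how well a trained network processes an input rather than by consulting an explicit lexicon, can be sketched with a toy autoencoder. All sizes, patterns, and the familiarity criterion below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "orthographic" patterns: binary feature vectors standing in
# for a small vocabulary; nonwords are novel random patterns.
n_words, n_feat, n_hidden = 10, 16, 12
words = rng.integers(0, 2, (n_words, n_feat)).astype(float)

# Train a simple two-layer network to reconstruct the trained
# vocabulary; connection weights, not stored entries, carry
# lexical knowledge (the core PDP commitment).
W1 = rng.standard_normal((n_feat, n_hidden)) * 0.1
W2 = rng.standard_normal((n_hidden, n_feat)) * 0.1
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

lr = 1.0
for _ in range(3000):
    h = sigmoid(words @ W1)
    out = sigmoid(h @ W2)
    err = out - words
    # backpropagate the reconstruction error through both layers
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / n_words
    W1 -= lr * words.T @ d_h / n_words

def recon_error(x):
    """Mean squared reconstruction error: a familiarity signal."""
    out = sigmoid(sigmoid(x @ W1) @ W2)
    return np.mean((out - x) ** 2)

# Lexical decision via familiarity: trained items should be
# reconstructed better than novel (nonword) patterns.
nonwords = rng.integers(0, 2, (n_words, n_feat)).astype(float)
word_err = np.mean([recon_error(w) for w in words])
nonword_err = np.mean([recon_error(n) for n in nonwords])
print(word_err < nonword_err)
```

Thresholding a network-internal familiarity measure of this kind is one common way PDP models perform lexical decision without any explicit word list; the specific measure used by the model described above may differ.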