In prevailing approaches to human sentence comprehension, the outcome of the word recognition process is assumed to be a categorical representation with no residual uncertainty. Yet perception is inevitably uncertain, and a system making optimal use of available information might retain this uncertainty and interactively recruit grammatical analysis and subsequent perceptual input to help resolve it. To test for the possibility of such an interaction, we tracked readers' eye movements as they read sentences constructed to vary in (i) whether an early word had near neighbors of a different grammatical category, and (ii) how strongly another word further downstream cohered grammatically with these potential near neighbors. Eye movements indicated that readers maintain uncertain beliefs about previously read word identities, revise these beliefs on the basis of relative grammatical consistency with subsequent input, and use these changing beliefs to guide saccadic behavior in ways consistent with principles of rational probabilistic inference.
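The belief-revision idea described here can be sketched as a single Bayesian update. In this hypothetical illustration (the words and probabilities are assumptions, not the study's materials), a reader's noisy percept of an early word is consistent with two near neighbors of different grammatical categories, and a downstream word that coheres grammatically with one reading shifts the posterior:

```python
# Hypothetical sketch: revising beliefs about an earlier word's identity
# based on grammatical consistency with later input, via Bayes' rule:
#   P(w1 | downstream) ∝ P(w1 | percept) * P(downstream | w1)

prior = {"at": 0.6, "as": 0.4}       # belief after initial noisy perception
likelihood = {"at": 0.1, "as": 0.7}  # assumed grammatical fit of the downstream word

unnorm = {w: prior[w] * likelihood[w] for w in prior}
z = sum(unnorm.values())
posterior = {w: p / z for w, p in unnorm.items()}

print(posterior)  # belief shifts toward "as"
```

Under these assumed numbers the posterior moves sharply toward the neighbor the downstream word coheres with, which is the kind of shift the eye-movement data are argued to reflect.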
Within human sentence processing, it is known that a word's probability in context has large effects on how long it takes to read. This relationship has been quantified using information-theoretic surprisal, or the amount of new information conveyed by a word. Here, we compare surprisals derived from a collection of language models based on n-grams, neural networks, and combinations of the two. We show that the models' psychological predictive power improves as a tight linear function of language model linguistic quality. We also show that the size of the effect of surprisal is estimated consistently across all types of language models. These findings point toward surprising robustness of surprisal estimates and suggest that surprisal estimates from low-quality language models are not biased.
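The surprisal measure itself is simple to state: the surprisal of word w in context c is -log2 P(w | c). A toy illustration, using made-up bigram counts rather than any of the language models compared in the abstract:

```python
import math

# Toy bigram language model (assumed counts, for illustration only).
bigram_counts = {("the", "dog"): 8, ("the", "idea"): 2}
context_count = 10  # total occurrences of the context "the"

def surprisal(context, word):
    """Surprisal in bits: -log2 P(word | context)."""
    p = bigram_counts[(context, word)] / context_count
    return -math.log2(p)

print(surprisal("the", "dog"))   # -log2(0.8), about 0.32 bits
print(surprisal("the", "idea"))  # -log2(0.2), about 2.32 bits
```

The less probable continuation carries more bits of new information, and on the surprisal account should take correspondingly longer to read.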
While much previous work on reading in languages with alphabetic scripts has suggested that reading is word-based, reading in Chinese has been argued to be less reliant on words. This is primarily because in the Chinese writing system words are not spatially segmented, and characters are themselves complex visual objects. Here, we present a systematic characterization of the effects of a wide range of word and character properties on eye movements in Chinese reading, using a set of mixed-effects regression models. The results reveal a rich pattern of effects of the properties of the current, previous, and next words on a range of reading measures, which is strikingly similar to the pattern of effects of word properties reported in spaced alphabetic languages. This finding provides evidence that reading shares a word-based core and may be fundamentally similar across languages with highly dissimilar scripts. We show that these findings are robust to the inclusion of character properties in the regression models, and are equally reliable when dependent measures are defined in terms of characters rather than words, providing strong evidence that word properties have effects in Chinese reading above and beyond characters. This systematic characterization of the effects of word and character properties in Chinese advances our knowledge of the processes underlying reading and informs the future development of models of reading. More generally, however, this work suggests that differences in script may not alter the fundamental nature of reading.
This research tests whether comprehenders use their knowledge of typical events in real time to process verbal arguments. In self-paced reading and event-related brain potential (ERP) experiments, we used materials in which the likelihood of a specific patient noun (brakes or spelling) depended on the combination of an agent and verb (mechanic checked vs. journalist checked). Reading times were shorter at the word directly following the patient for the congruent than the incongruent items. Differential N400s were found earlier, immediately at the patient. Norming studies ruled out any account of these results based on direct relations between the agent and patient. Thus, comprehenders dynamically combine information about real-world events based on intrasentential agents and verbs, and this combination then rapidly influences online sentence interpretation.

Keywords: sentence processing; psycholinguistics; language comprehension; event knowledge; self-paced reading; event-related potentials

A number of recent studies have shown that comprehenders are sensitive to thematic fit, or the plausibility of a noun as an argument of a particular verb (e.g., Kamide, Altmann, & Haywood, 2003; McRae, Spivey-Knowlton, & Tanenhaus, 1998). But what is the mechanism that underlies this effect? On one well-established approach, the verb's lexical representation encodes information about typical fillers of its thematic roles, as well as information about the selectional restrictions that the verb imposes on its arguments. Thus when the verb is accessed it makes available information about appropriate role fillers, so that the interpretation of meaning relies crucially on this information from the verb's lexical representation. Another possibility, however, is that comprehenders dynamically compute an interpretation based on their knowledge of events and situations, relying on all available cues. In what follows we test this hypothesis, focusing on the issue of thematic fit.
We begin by reviewing the work on that topic, and show that the full pattern of results is difficult to explain on a retrieval-based account. Instead, we argue that comprehenders rely on their knowledge of typical events and situations as they integrate information provided by not only the verb but other participants mentioned in discourse, in an attempt to generate expectancies about upcoming words.

© 2010 Elsevier Inc. All rights reserved. Corresponding author: Klinton Bicknell, kbicknell@ling.ucsd.edu, University of California, San Diego, Department of Linguistics #0108, 9500 Gilman Dr., La Jolla, CA 92093-0108.
A number of results in the study of real-time sentence comprehension have been explained by computational models as resulting from the rational use of probabilistic linguistic information. These hypotheses have often been tested in reading by linking predictions about relative word difficulty to word-aggregated eye tracking measures such as go-past time. In this paper, we extend these results by asking to what extent reading is well-modeled as rational behavior at a finer level of analysis, predicting not aggregate measures, but the duration and location of each fixation. We present a new rational model of eye movement control in reading, the central assumption of which is that eye movement decisions are made to obtain noisy visual information as the reader performs Bayesian inference on the identities of the words in the sentence. As a case study, we present two simulations demonstrating that the model gives a rational explanation for between-word regressions.
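The model's core computation can be sketched in a few lines. In this minimal illustration (the word pair and all probabilities are assumptions, not the model's actual parameters), a linguistic prior over candidate word identities is combined with the likelihood of one noisy visual sample; when such updates lower the posterior for an earlier word, a between-word regression can become the rational action:

```python
import numpy as np

# Sketch of per-fixation Bayesian inference on word identity:
# posterior ∝ prior (from the language context) × likelihood of the
# noisy visual sample under each candidate identity.

words = ["house", "horse"]
prior = np.array([0.7, 0.3])       # assumed language-model prior
sample_lik = np.array([0.2, 0.6])  # assumed visual likelihoods; sample looks like "horse"

posterior = prior * sample_lik
posterior /= posterior.sum()
print(dict(zip(words, posterior)))  # belief moves toward "horse"
```

In the full model this update runs on every fixation, and saccade targets are chosen to gather the visual samples most useful for resolving the remaining uncertainty.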