The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Computational models of cognition are increasingly being advanced as explanations of behavior. The success of this line of inquiry depends on the development of robust methods to guide the evaluation and selection of these models. This article introduces a method of selecting among mathematical models of cognition known as minimum description length, which provides an intuitive and theoretically well-grounded understanding of why one model should be chosen. A central but elusive concept in model selection, complexity, can also be derived with the method. The adequacy of the method is demonstrated in 3 areas of cognitive modeling: psychophysics, information integration, and categorization.

How should one choose among competing theoretical explanations of data? This question is at the heart of the scientific enterprise, regardless of whether verbal models are being tested in an experimental setting or computational models are being evaluated in simulations. A number of criteria have been proposed to assist in this endeavor, summarized nicely by Jacobs and Grainger (1994). They include (a) plausibility (are the assumptions of the model biologically and psychologically plausible?); (b) explanatory adequacy (is the theoretical explanation reasonable and consistent with what is known?); (c) interpretability (do the model and its parts, e.g., parameters, make sense? are they understandable?); (d) descriptive adequacy (does the model provide a good description of the observed data?); (e) generalizability (does the model predict well the characteristics of data that will be observed in the future?); and (f) complexity (does the model capture the phenomenon in the least complex, i.e., simplest, possible manner?).

The relative importance of these criteria may vary with the types of models being compared. For example, verbal models are likely to be scrutinized on the first three criteria just as much as the last three to thoroughly evaluate the soundness of the models and their assumptions. Computational models, on the other hand, may have already satisfied the first three criteria to a certain level of acceptability earlier in their evolution, leaving the last three criteria to be the primary ones on which they are evaluated. This emphasis on the latter three can be seen in the development of quantitative methods designed to compare models on these criteria. These methods are the topic of this article.

In the last two decades, interest in mathematical models of cognition and other psychological processes has increased tremendously. We view this as a positive sign for the discipline, for it suggests that this method of inquiry holds considerable promise. Among other things, a mathematical instantiation of a theory provides a test bed in which researchers can examine the detailed interactions of a model's parts with a level of precision that is not possible with verbal models. Furthermore, through systematic eval...
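To make the selection criterion concrete, the following is a minimal sketch of MDL-style model selection, not the article's own implementation. It assumes Gaussian noise and uses only the two-term approximation MDL ≈ -ln L̂ + (k/2) ln n (the fit and parametric-complexity terms; the full criterion also includes a geometric-complexity integral over the parameter space, omitted here). The data are synthetic, and the two competing psychophysical models (a power law and a logarithmic law) are illustrative stand-ins for the kinds of models the article compares.

```python
# Sketch: MDL-style comparison of two psychophysical models.
# Assumptions (not from the article): Gaussian noise, two-term MDL
# approximation, synthetic data, and k counting only the structural
# parameters (the shared noise sigma is treated as a nuisance parameter).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data generated from a power law with additive Gaussian noise.
x = np.linspace(1, 10, 50)
y = 2.0 * x**0.5 + rng.normal(scale=0.3, size=x.size)

def neg_log_lik(params, model, x, y):
    """Gaussian negative log-likelihood; last parameter is noise sigma."""
    *theta, sigma = params
    if sigma <= 0:
        return np.inf
    resid = y - model(x, theta)
    n = y.size
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + np.sum(resid**2) / (2 * sigma**2)

def power_law(x, theta):   # Stevens-style: a * x**b
    a, b = theta
    return a * x**b

def log_law(x, theta):     # Fechner-style: a + b * ln(x)
    a, b = theta
    return a + b * np.log(x)

def mdl_score(model, k, x, y, init):
    """Two-term MDL approximation: -ln L_hat + (k/2) ln n."""
    fit = minimize(neg_log_lik, init, args=(model, x, y), method="Nelder-Mead")
    return fit.fun + 0.5 * k * np.log(y.size)

for name, model in [("power", power_law), ("log", log_law)]:
    score = mdl_score(model, k=2, x=x, y=y, init=[1.0, 1.0, 1.0])
    print(f"{name:>5} model: MDL ~ {score:.2f}  (lower is preferred)")
```

With only the two terms kept, the score reduces to a BIC-like criterion; what distinguishes full MDL, and what the article develops, is the additional complexity term that penalizes a model's functional form and not merely its parameter count.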
Speech is produced over time, and this makes sensitivity to timing between speech events crucial for understanding language. Two experiments investigated whether perception of function words (e.g., or, are) is rate dependent in casual speech, which often contains phonetic segments that are spectrally quite reduced. In Experiment 1, talkers spoke sentences containing a target function word; slowing talkers' speech rate around this word caused listeners to perceive sentences as lacking the word (e.g., leisure or time was perceived as leisure time). In Experiment 2, talkers spoke matched sentences lacking a function word; speeding talkers' speech rate around the region in which the function word had been embedded in Experiment 1 caused listeners to perceive a function word that was never spoken (e.g., leisure time was perceived as leisure or time). The results suggest that listeners formed expectancies based on speech rate, and these expectancies influenced the number of words and word boundaries perceived. These findings may help explain the robustness of speech recognition when speech signals are distorted (e.g., because of a casual speaking style).