The Akaike information criterion (AIC; Akaike, 1973; see also, e.g., Akaike, 1978, 1979; Bozdogan, 1987; Burnham & Anderson, 2002).

The evaluation of competing hypotheses is central to the process of scientific inquiry. When the competing hypotheses are stated in the form of predictions from quantitative models, their adequacy with respect to observed data can be rigorously assessed. Given K plausible candidate models of the underlying process that generated the observed data, we would like to know which hypothesis or model best approximates the "true" process. More generally, we would like to know how much statistical evidence the data provide for each of the K models, preferably in terms of likelihood (Royall, 1997) or the probability of each model being correct (or the most correct, because the generating model may never be known for certain). The process of evaluating candidate models is termed model selection or model evaluation.

A straightforward solution to the problem of evaluating several candidate models is to select the model that gives the most accurate description of the data. However, model evaluation is complicated by the fact that a model with many free parameters is more flexible than a model with only a few parameters. It is clearly not desirable to always deem the most complex model the best; it is generally accepted that the best model is the one that provides an adequate account of the data with as few free parameters as possible.
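To make the fit-versus-complexity trade-off concrete, the sketch below shows one standard way to compute AIC values and Akaike weights for K candidate models from their maximized log-likelihoods and parameter counts. The function name and the example numbers are illustrative assumptions, not material from the article.

```python
import numpy as np

def akaike_weights(log_likelihoods, n_params):
    """Compute AIC values and Akaike weights for K candidate models.

    log_likelihoods : maximized log-likelihood of each model
    n_params        : number of free parameters of each model
    """
    log_likelihoods = np.asarray(log_likelihoods, dtype=float)
    n_params = np.asarray(n_params, dtype=float)

    # AIC = -2 ln(L) + 2 * (number of free parameters):
    # goodness of fit penalized by model complexity.
    aic = -2.0 * log_likelihoods + 2.0 * n_params

    # Differences relative to the best (lowest-AIC) model.
    delta = aic - aic.min()

    # Akaike weights: relative likelihood of each model, normalized to sum to 1.
    rel_likelihood = np.exp(-0.5 * delta)
    weights = rel_likelihood / rel_likelihood.sum()
    return aic, weights

# Hypothetical example: three candidate models of increasing flexibility.
aic, w = akaike_weights(log_likelihoods=[-102.3, -100.1, -99.8],
                        n_params=[2, 4, 7])
print(aic)  # lower is better
print(w)    # interpretable as the probability that each model is the best of the set
```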
This article introduces a new computational model for the complex-span task, the most popular task for studying working memory. SOB-CS is a two-layer neural network that associates distributed item representations with distributed, overlapping position markers. Memory capacity limits are explained by interference from a superposition of associations. Concurrent processing interferes with memory through involuntary encoding of distractors. Free time in between distractors is used to remove irrelevant representations, thereby reducing interference. The model accounts for benchmark findings in four areas: (1) effects of processing pace, processing difficulty, and number of processing steps; (2) effects of serial position and error patterns; (3) effects of different kinds of item-distractor similarity; and (4) correlations between span tasks. The model makes several new predictions in these areas, which were confirmed experimentally.
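The sketch below is not SOB-CS itself; it only illustrates the encoding scheme the abstract describes: distributed item vectors bound to overlapping position markers by superimposing outer-product associations in a single weight matrix, with interference arising from the superposition. The vector sizes, Gaussian representations, and overlap parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_item, n_pos, n_list = 50, 20, 5

# Distributed item representations and position markers (assumed Gaussian here).
items = rng.normal(size=(n_list, n_item))
positions = rng.normal(size=(n_list, n_pos))
# Make neighbouring position markers overlap by blending in the previous marker.
for j in range(1, n_list):
    positions[j] = 0.6 * positions[j] + 0.4 * positions[j - 1]

# Encoding: superimpose all item-position outer products in one weight matrix.
W = np.zeros((n_item, n_pos))
for item, pos in zip(items, positions):
    W += np.outer(item, pos)

def recall(position_marker):
    # Retrieval: cue with a position marker; the result is a noisy blend of all
    # stored items (interference), so pick the best-matching candidate item.
    retrieved = W @ position_marker
    scores = items @ retrieved
    return int(np.argmax(scores))

print([recall(positions[j]) for j in range(n_list)])  # ideally [0, 1, 2, 3, 4]
```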
Recent analyses of serial correlations in cognitive tasks have provided preliminary evidence of the presence of a particular form of long-range serial dependence known as 1/f noise. It has been argued that long-range dependence has been largely ignored in mainstream cognitive psychology even though it accounts for a substantial proportion of variability in behavior (see, e.g., Gilden, 1997, 2001). In this article, we discuss the defining characteristics of long-range dependence and argue that claims about its presence need to be evaluated by testing against the alternative hypothesis of short-range dependence. For the data from three experiments, we accomplish such tests with autoregressive fractionally integrated moving-average time series modeling. We find that long-range serial dependence in these experiments can be explained by any of several mechanisms, including mixtures of a small number of short-range processes.
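The claim that apparent 1/f noise can be mimicked by a few short-range processes can be illustrated with a simple simulation. The sketch below is a hedged illustration, not the ARFIMA analysis used in the article: it sums a few AR(1) processes with different autoregressive coefficients and estimates the low-frequency slope of the log-log spectrum, where a slope near -1 is the 1/f signature. The coefficients and band limits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 14

def ar1(phi, n):
    """Simulate a short-range AR(1) process x_t = phi * x_{t-1} + noise_t."""
    x = np.zeros(n)
    noise = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + noise[t]
    return x

# A mixture of a few short-range processes with very different time constants
# can produce an approximately 1/f-shaped spectrum over a wide frequency band.
series = ar1(0.5, n) + ar1(0.95, n) + ar1(0.995, n)

# Periodogram and log-log slope at low frequencies.
freqs = np.fft.rfftfreq(n)[1:]
power = np.abs(np.fft.rfft(series - series.mean()))[1:] ** 2 / n
low = freqs < 0.05
slope, _ = np.polyfit(np.log(freqs[low]), np.log(power[low]), 1)
print(f"low-frequency spectral slope: {slope:.2f}")  # values near -1 look like 1/f noise
```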