Two-choice response times are a common type of data, and much research has been devoted to the development of process models for such data. However, the practical application of these models is notoriously complicated, and flexible methods are largely nonexistent. We combine a popular model for choice response times, the Wiener diffusion process, with techniques from psychometrics in order to construct a hierarchical diffusion model. Chief among these techniques is the application of random effects, with which we allow for unexplained variability among participants, items, or other experimental units. These techniques lead to a modeling framework that is highly flexible and easy to work with. Among the many novel models this statistical framework provides are a multilevel diffusion model, regression diffusion models, and a large family of explanatory diffusion models. We provide examples and the necessary computer code.

Keywords: response time, psychometrics, hierarchical, random effects, diffusion model

Supplemental materials: http://dx.doi.org/10.1037/a0021765.supp

In his 1957 presidential address at the 65th annual business meeting of the American Psychological Association, Lee Cronbach drew a captivating sketch of the state of psychology at the time. He focused on the two distinct disciplines that then existed in the field of scientific psychology. On the one side, there was the experimental discipline, which concerned itself with the systematic manipulation of conditions in order to observe the consequences. On the other side, there was the correlational discipline, which focused on the study of preexisting differences between individuals or groups. Cronbach saw many potential contributions of these disciplines to one another and argued that the time and opportunity had come for the two dissociated fields to crossbreed: "We are free at last to look up from our own bedazzling treasure, to cast properly covetous glances upon the scientific wealth of our neighbor discipline. Trading has already been resumed, with benefit to both parties" (Cronbach, 1957, p. 675). Two decades onward, Cronbach (1975) saw the hybrid discipline flourishing across several domains.

In the area of measurement of psychological processes, there exists a schism similar to the one Cronbach pointed out in his presidential address. Psychological measurement and individual differences are studied in the domain of psychometrics, whereas cognitive processes are the stuff of the more nomothetic mathematical psychology. In both areas, statistical models are used extensively. There are common models based on the (general) linear model, such as analysis of variance (ANOVA) and regression, but we focus on more advanced, nonlinear techniques.

Experimental psychology has, for a long time, made use of process models to describe interesting psychological phenomena in various fields. Some famous examples are Sternberg's (1966) sequential exhaustive search model for visual search and memory scanning, Atkinson and Shiffrin's (1968) multistore model for memory, ...
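As a concrete illustration of the hierarchical idea described in the abstract above, the following sketch simulates a Wiener diffusion process and places a person-level random effect on the drift rate. It is a minimal sketch assuming Euler-Maruyama simulation and made-up parameter values (drift_mu, drift_sd, boundary); it is not the computer code supplied with the paper.

```python
import numpy as np

def simulate_wiener(drift, boundary, bias=0.5, ndt=0.3, dt=1e-3, noise=1.0, rng=None):
    """Simulate one trial of a Wiener diffusion process via Euler-Maruyama.

    Returns (response, rt): response is 1 for the upper boundary, 0 for the lower.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = bias * boundary          # starting point as a proportion of boundary separation
    t = 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(x >= boundary), ndt + t

# Random effect: each participant's drift rate is drawn from a population
# distribution, capturing unexplained variability among participants.
rng = np.random.default_rng(1)
drift_mu, drift_sd = 1.5, 0.5            # assumed population mean and SD of drift
n_participants, n_trials = 5, 200

for p in range(n_participants):
    drift_p = rng.normal(drift_mu, drift_sd)   # person-specific drift rate
    trials = [simulate_wiener(drift_p, boundary=1.2, rng=rng) for _ in range(n_trials)]
    acc = np.mean([resp for resp, _ in trials])
    mrt = np.mean([rt for _, rt in trials])
    print(f"participant {p}: drift={drift_p:.2f}, accuracy={acc:.2f}, mean RT={mrt:.2f} s")
```

In a full hierarchical analysis the population parameters would themselves be estimated from data rather than fixed, as they are in this forward simulation.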
We revisit the results of the recent Reproducibility Project: Psychology by the Open Science Collaboration. We compute Bayes factors (a quantity that can be used to express comparative evidence for a hypothesis but also for the null hypothesis) for a large subset (N = 72) of the original papers and their corresponding replication attempts. In our computation, we take into account the likely scenario that publication bias had distorted the originally published results. Overall, 75% of studies gave qualitatively similar results in terms of the amount of evidence provided. However, the evidence was often weak (i.e., Bayes factor < 10). The majority of the studies (64%) did not provide strong evidence for either the null or the alternative hypothesis in either the original or the replication, and no replication attempt provided strong evidence in favor of the null. In all cases where the original paper provided strong evidence but the replication did not (15%), the sample size in the replication was smaller than in the original study. Where the replication provided strong evidence but the original did not (10%), the replication sample size was larger. We conclude that the apparent failure of the Reproducibility Project to replicate many target effects can be adequately explained by overestimation of effect sizes (or overestimation of evidence against the null hypothesis) due to small sample sizes and publication bias in the psychological literature. We further conclude that traditional sample sizes are insufficient and that a more widespread adoption of Bayesian methods is desirable.
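To make the central quantity concrete, here is a minimal, generic sketch of a Bayes factor computed from marginal likelihoods in a simple binomial setting (a uniform Beta(1, 1) prior on the success rate against the point null of 0.5). This toy example is only illustrative; it is not the publication-bias-adjusted procedure used in the reanalysis above.

```python
from math import comb, exp, log
from scipy.special import betaln

def bf10_binomial(k, n, a=1.0, b=1.0):
    """Bayes factor for H1: theta ~ Beta(a, b) versus H0: theta = 0.5,
    given k successes in n binomial trials (ratio of marginal likelihoods)."""
    log_m0 = log(comb(n, k)) + n * log(0.5)                              # log P(data | H0)
    log_m1 = log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)   # log P(data | H1)
    return exp(log_m1 - log_m0)

# Example: 60 successes in 100 trials.
print(f"BF10 = {bf10_binomial(60, 100):.2f}")  # >1 favours H1, <1 favours H0; BF01 = 1/BF10
```

With these numbers the Bayes factor lands close to 1, which echoes the broader point of the reanalysis: data sets of typical size often provide only weak evidence in either direction.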
According to the principle of parsimony, model selection methods should value both descriptive accuracy and simplicity. Here we focus primarily on Bayes factors and minimum description length, explaining how these procedures strike a balance between goodness of fit and parsimony. Throughout, we demonstrate the methods with an application to false memory, evaluating three competing multinomial processing tree models of interference in memory.
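For a rough sense of how such criteria trade descriptive accuracy against simplicity, the sketch below uses the familiar BIC approximation to the Bayes factor, in which each free parameter costs ln(n) in penalized deviance. The log-likelihoods and parameter counts are made-up placeholders, not results for the memory models discussed above.

```python
import math

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: misfit plus a complexity penalty."""
    return -2.0 * log_lik + n_params * math.log(n_obs)

def approx_bayes_factor(bic_a, bic_b):
    """Large-sample approximation of the Bayes factor favouring model A over model B."""
    return math.exp((bic_b - bic_a) / 2.0)

# Hypothetical example: the more complex model fits better (higher log-likelihood),
# yet the penalty for its extra parameters lets the simpler model win overall.
n_obs = 400
bic_simple  = bic(log_lik=-520.0, n_params=4, n_obs=n_obs)
bic_complex = bic(log_lik=-516.0, n_params=6, n_obs=n_obs)
print(f"BIC(simple)  = {bic_simple:.1f}")
print(f"BIC(complex) = {bic_complex:.1f}")
print(f"approximate BF(simple over complex) = {approx_bayes_factor(bic_simple, bic_complex):.1f}")
```

Minimum description length strikes a comparable balance, quantifying complexity by how compactly a model can encode the data rather than by a fixed per-parameter penalty.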
Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables including (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced-choice task. The contributing teams were blind to the manipulations and had to infer, using their method of choice, which aspect of behavior had been manipulated. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler's degrees of freedom" did affect the inferences. Interestingly, many of the simpler approaches yielded inferences that were as robust and accurate as those from the more complex methods. We recommend that, in general, cognitive models become a standard analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and we discuss potential pitfalls in interpreting the output of response time models.
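One of the simpler procedures of the kind referred to above is the EZ-diffusion method (Wagenmakers, van der Maas, & Grasman, 2007), which converts accuracy, RT variance, and mean RT directly into drift rate, boundary separation, and non-decision time. The sketch below is a minimal implementation under that method's standard assumptions (no starting-point bias, scaling parameter s = 0.1); the input numbers are purely illustrative, and whether any contributing team used exactly this routine is not asserted here.

```python
import math

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """EZ-diffusion point estimates from summary statistics.

    prop_correct: proportion of correct responses (here assumed strictly between 0.5 and 1)
    rt_var:       variance of correct-response RTs, in seconds squared
    rt_mean:      mean of correct-response RTs, in seconds
    Returns (drift rate v, boundary separation a, non-decision time Ter).
    """
    L = math.log(prop_correct / (1.0 - prop_correct))           # logit of accuracy
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_var
    v = math.copysign(1.0, prop_correct - 0.5) * s * x**0.25     # drift rate
    a = s**2 * L / v                                              # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))  # mean decision time
    ter = rt_mean - mdt                                           # non-decision time
    return v, a, ter

# Illustrative summary statistics: 80% correct, RT variance 0.112 s^2, mean RT 0.723 s.
v, a, ter = ez_diffusion(prop_correct=0.80, rt_var=0.112, rt_mean=0.723)
print(f"drift v = {v:.3f}, boundary a = {a:.3f}, non-decision time Ter = {ter:.3f} s")
```

Because the estimator is closed-form, it can be applied separately to each participant and condition, which makes it straightforward to use in standard two-condition designs like those described above.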