Evidence accumulation models (EAMs) – the dominant modelling framework for speeded decision-making – have become an important tool for model application. Model application involves using a specific model to estimate parameter values that relate to different components of the cognitive process, and how these values differ over experimental conditions and/or between groups of participants. In this context, researchers are often agnostic to the specific theoretical assumptions made by different EAM variants, and simply desire a model that will provide them with an accurate measurement of the parameters that they are interested in. However, recent research has suggested that the two most commonly applied EAMs – the diffusion model and the linear ballistic accumulator (LBA) – come to fundamentally different conclusions when applied to the same empirical data. The current study provides an in-depth assessment of the measurement properties of the two models, as well as the mapping between them, using two large-scale simulation studies and a reanalysis of Evans (2020a). Importantly, the findings indicate that there is a major identifiability issue within the standard LBA, where differences in decision threshold between conditions are practically unidentifiable, which appears to be caused by a tradeoff between the threshold parameter and the overall drift rate across the different accumulators. While this issue can be remedied by placing some constraint on the overall drift rate across the different accumulators – such as constraining the average drift rate or the drift rate of one accumulator to have the same value in each condition – these constraints can qualitatively change the conclusions of the LBA regarding other constructs, such as non-decision time. Furthermore, all LBA variants considered in the current study still provide qualitatively different conclusions to the diffusion model. Importantly, the current findings suggest that researchers should not use the unconstrained version of the LBA for model application, and they bring into question the conclusions of previous studies that used the unconstrained LBA.
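As a purely illustrative aside (not part of the original abstract), the kind of mimicry behind the reported threshold–drift tradeoff can be made concrete with a quick simulation. The sketch below uses a standard two-accumulator LBA parameterisation (uniform start points, normally distributed trial-level drift rates, fixed drift standard deviation) with arbitrary, assumed parameter values. It compares a baseline set against two alternatives: one that raises the threshold, and one that lowers the mean drift rate while holding the drift-rate difference constant. Both alternatives shift the predicted response times in the same direction, which is the sort of overlap that can make between-condition threshold differences hard to recover; a proper identifiability analysis of course requires fitting the model, as in the simulation studies described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lba(n, b, A, v, s=1.0, t0=0.2):
    """Simulate n trials of a two-accumulator LBA.

    b  : response threshold (shared by both accumulators)
    A  : upper bound of the uniform start-point distribution
    v  : mean drift rates for the two accumulators (index 0 = "correct")
    s  : trial-level drift-rate standard deviation (fixed to set the scale)
    t0 : non-decision time
    """
    k = rng.uniform(0, A, size=(n, 2))        # start points
    d = rng.normal(v, s, size=(n, 2))         # trial-level drift rates
    t = np.where(d > 0, (b - k) / d, np.inf)  # finishing times; negative drifts never finish
    choice = np.argmin(t, axis=1)             # first accumulator to reach b wins
    rt = np.min(t, axis=1) + t0
    ok = np.isfinite(rt)                      # drop the rare trials where neither finishes
    return rt[ok], choice[ok]

# Baseline versus (i) a raised threshold and (ii) lowered mean drifts with the
# same drift-rate difference; all parameter values are illustrative only.
parameter_sets = {
    "baseline":         dict(b=1.0, A=0.5, v=np.array([3.0, 2.0])),
    "higher threshold": dict(b=1.5, A=0.5, v=np.array([3.0, 2.0])),
    "lower mean drift": dict(b=1.0, A=0.5, v=np.array([2.5, 1.5])),
}

for label, pars in parameter_sets.items():
    rt, choice = simulate_lba(100_000, **pars)
    print(f"{label:>16s}: accuracy={np.mean(choice == 0):.3f}, "
          f"median RT={np.median(rt):.3f}s, 90th pct={np.quantile(rt, 0.9):.3f}s")
```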
In a sequential hypothesis test, the analyst checks at multiple steps during data collection whether sufficient evidence has accrued to make a decision about the tested hypotheses. As soon as sufficient information has been obtained, data collection is terminated. Here, we compare two sequential hypothesis testing procedures that have recently been proposed for use in psychological research: the Sequential Probability Ratio Test (SPRT; Schnuerch & Erdfelder, 2020) and the Sequential Bayes Factor Test (SBFT; Schönbrodt et al., 2017). We show that although the two methods have been presented as distinct methodologies in the past, they share many similarities and can even be regarded as two instances of the same overarching hypothesis testing framework. We demonstrate that the two methods use the same mechanisms for evidence monitoring and error control, and that differences in efficiency between the methods depend on the exact specification of the statistical models involved. Given the close relationship between the SPRT and SBFT, we argue that the choice of the sequential testing method should be regarded as a continuous choice within a unified framework rather than a dichotomous choice between two methods. We present several considerations researchers can make to navigate the design decisions in the SPRT and SBFT.
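As a hedged illustration (not taken from the original abstract), the shared mechanism described here – monitor an evidence measure after each observation and stop once it crosses a decision boundary – can be sketched for the simplest case: Wald's SPRT for a binomial rate, with assumed hypotheses H0: p = 0.5 and H1: p = 0.7. The SBFT follows the same loop, with the cumulative likelihood ratio replaced by a Bayes factor and the Wald error-controlled boundaries replaced by evidence thresholds.

```python
import numpy as np
from math import log

rng = np.random.default_rng(0)

def sprt_binomial(data, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for a binomial rate: H0: p = p0 vs H1: p = p1.

    Accumulates the log likelihood ratio after each observation and stops
    as soon as it crosses one of the two error-controlled boundaries.
    """
    upper = log((1 - beta) / alpha)   # crossing this boundary -> accept H1
    lower = log(beta / (1 - alpha))   # crossing this boundary -> accept H0
    llr = 0.0
    for n, x in enumerate(data, start=1):
        llr += x * log(p1 / p0) + (1 - x) * log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "undecided", len(data)

# Example run on data generated under H1 (true rate 0.7).
data = rng.binomial(1, 0.7, size=500)
decision, n_used = sprt_binomial(data)
print(decision, "after", n_used, "observations")
```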
Psychology research has become increasingly focused on creating formalized models of psychological processes, which can make exact quantitative predictions about observed data that are the result of some unknown psychological process, allowing a better understanding of how psychological processes may actually operate. However, using models to understand psychological processes comes with an additional challenge: how do we select the best model from a range of potential models that all aim to explain the same psychological process? A recent article by Navarro (2018; Computational Brain & Behavior) provided a detailed discussion on several broad issues within the area of model selection, with Navarro suggesting that "one of the most important functions of a scientific theory is ... to encourage directed exploration of new territory" (p. 3), that "understanding how the qualitative patterns in the empirical data emerge naturally from a computational model of a psychological process is often more scientifically useful than presenting a quantified measure of its performance" (p. 6), and that "quantitative measures of performance are essentially selecting models based on their ancillary assumptions" (p. 6). Here, I provide a critique of several of Navarro's points on these broad issues. In contrast to Navarro, I argue that all possible data should be considered when evaluating a process model (i.e., not just data from novel contexts), that quantitative model selection methods provide a more principled and complete method of selecting between process models than visual assessments of qualitative trends, and that the idea of ancillary assumptions that are not part of the core explanation in the model is a slippery slope to an infinitely flexible model.
Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the non-linearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work we show how these challenges can be addressed through a combination of Bayesian hierarchical modelling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study.
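To give a concrete, if simplified, picture of the model-averaging step (the numbers below are hypothetical and are not taken from the study), Bayesian model averaging weights each candidate model by its posterior model probability, computed here from assumed log marginal likelihoods under equal prior model probabilities, and then combines a parameter estimate across models using those weights.

```python
import numpy as np

# Hypothetical log marginal likelihoods for three candidate model variants
# (e.g., diffusion models allowing different parameters to vary across
# conditions); the values are made up purely for illustration.
log_ml = np.array([-1302.4, -1299.8, -1305.1])

# Posterior model probabilities under equal prior model probabilities,
# computed on the log scale for numerical stability.
w = np.exp(log_ml - log_ml.max())
post_prob = w / w.sum()

# Hypothetical posterior means of a shared parameter (e.g., non-decision time)
# under each model; the model-averaged estimate weights them by posterior
# model probability, so inference reflects model uncertainty.
theta_hat = np.array([0.31, 0.29, 0.34])
theta_bma = np.sum(post_prob * theta_hat)

print("posterior model probabilities:", np.round(post_prob, 3))
print("model-averaged estimate:", round(theta_bma, 3))
```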
With the advancement of technologies like in-car navigation and smartphones, concerns around how cognitive functioning is influenced by "workload" are increasingly prevalent. Research shows that spreading effort across multiple tasks can impair cognitive abilities through an overuse of resources, and that similar overload effects arise in difficult single-task paradigms. We developed a novel lab-based extension of the Detection Response Task, which measures workload, and paired it with a Multiple Object Tracking Task to manipulate cognitive load. Load was manipulated either by changing within-task difficulty or by the addition of an extra task. Using quantitative cognitive modelling we showed that these manipulations cause similar cognitive impairments through diminished processing rates, but that the introduction of a second task tends to invoke more cautious response strategies that do not occur when only difficulty changes. We conclude that more prudence should be exercised when directly comparing multitasking and difficulty-based workload impairments, particularly when relying on measures of central tendency.