Parameter estimation in evidence-accumulation models of choice response times is demanding of both the data and the user. We outline how to fit evidence-accumulation models using the flexible, open-source, R-based Dynamic Models of Choice (DMC) software. DMC provides a hands-on introduction to the Bayesian implementation of two popular evidence-accumulation models: the diffusion decision model (DDM) and the linear ballistic accumulator (LBA). It enables individual and hierarchical estimation, as well as assessment of the quality of a model's parameter estimates and descriptive accuracy. First, we introduce the basic concepts of Bayesian parameter estimation, guiding the reader through a simple DDM analysis. We then illustrate the challenges of fitting evidence-accumulation models using a set of LBA analyses. We emphasize best practices in modeling and discuss the importance of parameter- and model-recovery simulations, exploring the strengths and weaknesses of models in different experimental designs and parameter regions. We also demonstrate how DMC can be used to model complex cognitive processes, using as an example a race model of the stop-signal paradigm, which is used to measure inhibitory ability. We illustrate the flexibility of DMC by extending this model to account for mixtures of cognitive processes resulting from attention failures. We then guide the reader through the practical details of a Bayesian hierarchical analysis, from specifying priors to obtaining posterior distributions that encapsulate what has been learned from the data. Finally, we illustrate how the Bayesian approach leads to a quantitatively cumulative science, showing how to use posterior distributions to specify priors that can be used to inform the analysis of future experiments.
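The LBA's core mechanics mentioned above can be conveyed with a minimal single-trial simulation: each accumulator starts at a uniform point below threshold and races linearly at a normally distributed drift rate. This is an illustrative sketch using common LBA parameter names (`v`, `s`, `A`, `b`, `t0`), not DMC's actual R interface.

```python
import random

def simulate_lba_trial(v=(1.0, 0.7), s=0.3, A=0.5, b=1.0, t0=0.2, rng=None):
    """Simulate one trial of a two-accumulator linear ballistic accumulator.

    Each accumulator starts at a uniform point in [0, A] and rises linearly
    at a drift rate drawn from Normal(v[i], s); the first to reach the
    threshold b determines the response, and t0 adds non-decision time.
    """
    rng = rng or random.Random()
    finish_times = []
    for drift_mean in v:
        drift = rng.gauss(drift_mean, s)
        while drift <= 0:  # resample non-positive drifts so the race finishes
            drift = rng.gauss(drift_mean, s)
        start = rng.uniform(0, A)
        finish_times.append((b - start) / drift)
    response = min(range(len(v)), key=lambda i: finish_times[i])
    rt = t0 + finish_times[response]
    return response, rt
```

Repeatedly calling this function yields joint response-and-RT distributions of the kind the Bayesian machinery in DMC fits to data.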
Highlights:
• We develop a new across-subjects Bayesian state-trace analysis.
• We present an improved computational method for Bayesian state-trace analysis.
• We apply our new methods to an existing data set.

Abstract: State-trace methods have recently been advocated for exploring the latent dimensionality of psychological processes. These methods rely on assessing the monotonicity of a set of responses embedded within a state-space. Prince, Brown, and Heathcote (2012) proposed Bayes factors for state-trace analysis, allowing the assessment of the evidence for monotonicity within individuals. Under the assumption that the population is homogeneous, these Bayes factors can be combined across participants to produce a "group" Bayes factor comparing the monotone hypothesis to the non-monotone hypothesis. However, combining information across individuals without assuming homogeneity is problematic due to the nonparametric nature of state-trace analysis. We introduce group-level Bayes factors that can be used to assess the evidence that the population is homogeneous vs. heterogeneous, and demonstrate their utility using data from a visual change-detection task. Additionally, we describe new computational methods for rapidly computing individual-level Bayes factors. The Bayes factors that we develop to address the question of monotonicity, and hence the dimensionality, of binary dependent variables are based on the work of Klugkist, Laudy, and Hoijtink (2005), as applied to state-trace analysis by Prince, Brown, and Heathcote (2012).
The aim of Prince et al.'s approach is to preserve the essentially non-parametric nature of state-trace analysis by making only fairly minimal statistical assumptions, the main one being that the dependent variable has a binomial distribution. Prince, Hawkins, Love, and Heathcote's (2012) approach primarily focused on separate analyses of each participant's data, as they showed that state-trace analysis can be invalid when applied to data averaged over participants; in particular, they provided examples where the average of two monotonic relationships is non-monotonic. For inference at the group level, they suggested taking the product of individual participants' Bayes factors, but acknowledged the weakness of two necessary underlying assumptions: (1) that participants are either all of one type (e.g., monotonic) or another (e.g., non-monotonic), and (2) that the participants are entirely unrelated.
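The product rule suggested for group-level inference is simple to state: under the homogeneity assumption, the group Bayes factor is the product of the independent individual-level Bayes factors. A minimal sketch (accumulating in log space for numerical stability; the function name is illustrative, not from the published software):

```python
import math

def group_bayes_factor(individual_bfs):
    """Combine independent individual-level Bayes factors into a single
    'group' Bayes factor under the assumption that all participants are of
    one type (e.g., all monotone or all non-monotone).

    The product is accumulated in log space so that combining many
    participants does not overflow or underflow floating-point range.
    """
    log_bf = sum(math.log(bf) for bf in individual_bfs)
    return math.exp(log_bf)
```

For example, individual Bayes factors of 2, 3, and 0.5 combine to a group Bayes factor of 3: evidence against the hypothesis in one participant directly offsets evidence for it in another, which is exactly why the homogeneity assumption matters.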
The stop-signal paradigm has become ubiquitous in investigations of inhibitory control. Tasks inspired by the paradigm, referred to as stop-signal tasks, require participants to make responses on go trials and to inhibit those responses when presented with a stop-signal on stop trials. Currently, the most popular version of the stop-signal task is the ‘choice-reaction’ variant, in which participants make choice responses but must inhibit those responses when presented with a stop-signal. An alternative to the choice-reaction variant is the ‘anticipated response inhibition’ task. In anticipated response inhibition tasks, participants are required to make a planned response that coincides with a predictably timed event (such as lifting a finger from a computer key to stop a filling bar at a predefined target). Anticipated response inhibition tasks have some advantages over the more traditional choice-reaction stop-signal tasks and are becoming increasingly popular. However, there are currently no openly available versions of the anticipated response inhibition task, limiting potential uptake. Here, we present an open-source, free, and ready-to-use version of the anticipated response inhibition task, which we refer to as the Open-Source Anticipated Response Inhibition (OSARI) task.
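Stop-signal tasks of either variant are typically scored under the independent race model. A standard non-parametric estimator of stop-signal reaction time (SSRT) is the integration method: find the go-RT quantile matching the probability of responding on stop trials, then subtract the stop-signal delay (SSD). The sketch below shows that textbook estimator; it is not the race-model fitting procedure implemented in any particular package.

```python
def estimate_ssrt(go_rts, ssd, p_respond):
    """Estimate SSRT with the integration method of the independent race
    model.

    go_rts    : reaction times (seconds) from go trials
    ssd       : stop-signal delay (seconds)
    p_respond : observed probability of (failing to inhibit and) responding
                on stop trials at this SSD

    The finishing time of the stop process is estimated as the go-RT value
    at rank p_respond * n, and SSRT is that value minus the SSD.
    """
    rts = sorted(go_rts)
    n = max(1, round(p_respond * len(rts)))  # 1-indexed rank into the go RTs
    nth_rt = rts[n - 1]
    return nth_rt - ssd
```

For instance, with go RTs spread evenly from 0.01 s to 1.00 s, a response probability of 0.5, and an SSD of 0.2 s, the estimate is 0.50 − 0.20 = 0.30 s.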
Human operators often experience large fluctuations in cognitive workload over seconds timescales that can lead to sub-optimal performance, ranging from overload to neglect. Adaptive automation could potentially address this issue, but to do so it needs to be aware of real-time changes in operators’ spare cognitive capacity, so it can provide help in times of peak demand and take advantage of troughs to elicit operator engagement. However, it is unclear whether rapid changes in task demands are reflected in similarly rapid fluctuations in spare capacity, and if so what aspects of responses to those demands are predictive of the current level of spare capacity. We used the ISO standard detection response task (DRT) to measure cognitive workload approximately every 4 s in a demanding task requiring monitoring and refueling of a fleet of simulated unmanned aerial vehicles (UAVs). We showed that the DRT provided a valid measure that can detect differences in workload due to changes in the number of UAVs. We used cross-validation to assess whether measures related to task performance immediately preceding the DRT could predict detection performance as a proxy for cognitive workload. Although the simple occurrence of task events had weak predictive ability, composite measures that tapped operators’ situational awareness with respect to fuel levels were much more effective. We conclude that cognitive workload does vary rapidly as a function of recent task events, and that real-time predictive models of operators’ cognitive workload provide a potential avenue for automation to adapt without an ongoing need for intrusive workload measurements.
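The cross-validation logic described above can be sketched with a toy example: score how well a single workload feature predicts DRT outcomes on held-out folds. Everything here is hypothetical (one scalar feature, a threshold-at-the-training-mean rule, labels where 1 marks a DRT miss); the paper's composite situational-awareness measures and modeling pipeline are not reproduced.

```python
def kfold_accuracy(features, labels, k=5):
    """Estimate out-of-sample accuracy of a simple threshold rule via
    k-fold cross-validation.

    features : one hypothetical workload measure per DRT presentation
    labels   : 1 for a (hypothetical) DRT miss, 0 for a detection

    For each fold, the threshold is the mean of the training features, and
    held-out trials with a feature above it are predicted to be misses.
    """
    n = len(features)
    correct = 0
    for fold in range(k):
        test_idx = set(range(fold, n, k))  # every k-th trial held out
        train = [f for i, f in enumerate(features) if i not in test_idx]
        threshold = sum(train) / len(train)
        for i in sorted(test_idx):
            pred = 1 if features[i] > threshold else 0
            correct += int(pred == labels[i])
    return correct / n
```

Comparing such cross-validated accuracy across candidate predictors is one simple way to ask, as the abstract does, which aspects of recent task events carry information about current spare capacity.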