2020
DOI: 10.1037/rev0000192

Systematic and random sources of variability in perceptual decision-making: Comment on Ratcliff, Voskuilen, and McKoon (2018).

Abstract: A key assumption of models of human cognition is that there is variability in information processing. Evidence accumulation models (EAMs) commonly assume two broad sources of variability in information processing: within-trial variability, thought to reflect moment-to-moment fluctuations in perceptual processes, and between-trial variability, thought to reflect variability in slower-changing processes such as attention, or systematic differences between the stimuli on different trials. Recently, Ratcliff…

Cited by 14 publications (14 citation statements)
References 57 publications
“…Although between-trial variability in processing components is plausible, we believe that a large contribution of between-trial variability to the fit quality of a model is problematic, as there is generally no explanation of why this variability occurs or why it has the parametric form researchers assume to represent it (see the modeling of Experiment 2 for an exception regarding between-trial variability in drift rate that arises from theoretical properties of the approximate number system). In this view, between-trial variability essentially corresponds to adding a random component to the model with no strong theoretical motivation beyond improving fit quality (Evans, Tillman, et al., 2020). Consequently, we consider our findings regarding between-trial variability as additional evidence for DSDTDM.…”
Section: Comparisons With DTDM
confidence: 87%
“…The third elaboration, which completes the specification of the full DDM that we focus on here, is uniform across-trial variability in non-decision time with mean t0 and range st (Ratcliff & Tuerlinckx, 2002). Although these three elaborations make the DDM a realistic model of choice RT, the across-trial variability parameters can be hard to estimate (Boehm, Annis, et al., 2018; Evans et al., 2020; Lerche et al., 2017; van Ravenzwaaij & Oberauer, 2009). Distribution functions for the full DDM can be obtained analytically for across-trial drift variability and through numerical integration over the non-decision time and start-point distributions.…”
Section: Bayesian Hierarchical DDMs
confidence: 93%
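The citation statement above describes the "full" DDM: within-trial Gaussian noise plus across-trial variability in drift rate (normal), start point (uniform), and non-decision time (uniform with mean t0 and range st). A minimal simulation sketch of that model follows; all parameter values are chosen purely for illustration and do not come from any of the papers cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_full_ddm(n_trials=1000, v=1.0, a=1.0, z=0.5,
                      t0=0.3, sv=0.5, sz=0.1, st=0.1,
                      s=1.0, dt=0.001, max_t=5.0):
    """Simulate the full DDM with across-trial variability.

    Per trial: drift ~ Normal(v, sv); start point ~ Uniform(z - sz/2, z + sz/2)
    as a proportion of boundary separation a; non-decision time
    ~ Uniform(t0 - st/2, t0 + st/2). Evidence evolves by Euler steps with
    within-trial noise of standard deviation s. Illustrative values only.
    """
    rts, choices = [], []
    for _ in range(n_trials):
        drift = rng.normal(v, sv)                        # across-trial drift variability
        x = a * rng.uniform(z - sz / 2, z + sz / 2)      # across-trial start-point variability
        ter = rng.uniform(t0 - st / 2, t0 + st / 2)      # across-trial non-decision-time variability
        t = 0.0
        while 0.0 < x < a and t < max_t:                 # accumulate until a boundary is hit
            x += drift * dt + s * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + ter)                              # observed RT = decision time + ter
        choices.append(1 if x >= a else 0)               # 1 = upper boundary, 0 = lower
    return np.array(rts), np.array(choices)

rts, choices = simulate_full_ddm()
print(f"accuracy: {choices.mean():.2f}, mean RT: {rts.mean():.3f} s")
```

Simulation like this is one way to build intuition for why the across-trial parameters are hard to estimate: sv, sz, and st each reshape the RT distributions only subtly, so their effects are easy to confuse in fitting.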
“…It should also be noted that the diffusion framework often includes three additional parameters for the between-trial variability in drift rate, starting point, and nondecision time, respectively (Ratcliff, 1978; Ratcliff & Rouder, 1998; Ratcliff & Tuerlinckx, 2002), and that these parameters can be integrated into the DMC framework (Evans & Servant, 2020). However, these between-trial variability parameters can compromise the measurement properties of the model (Boehm et al, 2018; Evans, Tillman, & Wagenmakers, 2020; Lerche et al, 2017; Lerche & Voss, 2016; van Ravenzwaaij & Oberauer, 2009), which would prevent us from robustly disentangling facilitation and interference effects.…”
Section: Model Development
confidence: 99%