2015
DOI: 10.3389/fpsyg.2015.01046
Statistical learning and adaptive decision-making underlie human response time variability in inhibitory control

Abstract: Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to…

Cited by 11 publications (21 citation statements)
References 23 publications
“…Both HSC and MDI behave as though they believe the environment to be changeable—in fact, expecting a change approximately once every 1/(1-α) = 1/(1-0.6) = 2.5 trials—instead of assuming the reward rates to be static, which was the “true” experimental design. The estimated α is smaller than the values found in most other tasks, which tend to be around 0.7–0.8 (Yu and Cohen, 2009; Ide et al., 2013; Yu and Huang, 2014; Zhang et al., 2014; Ma and Yu, 2015), and may potentially be due to the longer inter-trial interval used in this task (temporal discounting may be influenced by absolute time as well, not only by discrete trials as assumed in DBM).…”
Section: Results
confidence: 72%
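The run-length arithmetic in the quote above can be sketched in a few lines. This is a minimal illustration, assuming a simple exponential-forgetting approximation to the DBM predictive update; the function names and the specific update form are illustrative, not the cited paper's exact algorithm.

```python
def expected_run_length(alpha: float) -> float:
    """Mean trials between assumed change points when the environment
    is believed to stay the same with per-trial probability alpha."""
    return 1.0 / (1.0 - alpha)

def dbm_predictive_mean(observations, alpha=0.6, prior_mean=0.5):
    """A toy exponential-forgetting sketch of a DBM-style belief update:
    each trial the belief decays toward the prior (a possible change
    point), then moves toward the new observation with a fixed gain."""
    belief = prior_mean
    trajectory = []
    for x in observations:
        # mix the previous belief with the prior mean (change-point mass)
        belief = alpha * belief + (1 - alpha) * prior_mean
        # then nudge the belief toward the new binary observation
        belief = belief + (1 - alpha) * (x - belief)
        trajectory.append(belief)
    return trajectory

print(expected_run_length(0.6))  # 2.5, matching the quote's arithmetic
```

With alpha = 0.6 the filter expects a change every 2.5 trials, which is why the quoted subjects behave as though a static environment is changeable.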
“…In a second experiment, 8- and 9-year-olds' math achievement was not uniquely predicted by magnitude comparison accuracy, over and above inhibitory control. The authors therefore argued that the relationship between the ability to compare magnitudes and math achievement could be accounted for by individual differences in inhibitory control (but see Keller and Libertus, 2015, for contradictory findings).…”
Section: Introduction
confidence: 99%
“…by discretizing belief state space [19] or averaging over possible change point times [1]) is both difficult and (thus) unnecessary for making Bayes-optimal predictions. In practice, implementing the m-ary DBM by discretization of the belief state space is common practice [19,22,8,7], which has a computational and representational complexity of O(e^{km}) per observation, where k depends on the fineness of the discretization, while the near-exact Bayesian approximation EXP is only O(m).…”
Section: Discussion
confidence: 99%
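The complexity gap described in the quote above can be made concrete by simply counting representation sizes. This sketch does not implement either algorithm; the grid parameters and the use of k^m grid points as a stand-in for the exponential O(e^{km}) cost are assumptions for illustration.

```python
def discretized_dbm_states(k: int, m: int) -> int:
    """Grid points needed to discretize a belief over an m-ary outcome
    distribution with resolution parameter k: exponential in m."""
    return k ** m

def exp_approx_state(m: int) -> int:
    """The EXP approximation keeps one running estimate per outcome: O(m)."""
    return m

print(discretized_dbm_states(10, 3))  # 1000 grid points
print(exp_approx_state(3))            # 3 numbers
```

Even at modest resolution, the discretized representation grows explosively with the number of outcomes m, while EXP's storage and per-observation cost stay linear.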
“…While the analytical relationship we derive between DBM and EXP breaks down when change points are rare (α ≈ 1), this seems to be irrelevant in tasks where human behavior has been shown to be fitted well by DBM, since the fitted α ranges between 0.7 and 0.8 [19,22,23,8,7]. Finally, we note that our approximation technique does not preclude an approximation in which the learning rate is modulated from trial to trial.…”
Section: Discussion
confidence: 99%
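The fixed-learning-rate approximation referenced above can be sketched as a standard delta-rule (exponential) filter. The specific mapping eta = 1 - alpha used below is an illustrative assumption, not the cited paper's exact analytical relationship between DBM and EXP.

```python
def exp_filter(observations, eta, init=0.5):
    """Delta-rule / exponential filter: estimate += eta * (x - estimate).
    A constant eta acts as a fixed learning rate; the cited work relates
    such a rate analytically to the DBM's change probability (1 - alpha)."""
    estimate = init
    history = []
    for x in observations:
        estimate += eta * (x - estimate)
        history.append(estimate)
    return history

# With eta = 1 - alpha = 0.3 (alpha = 0.7, within the 0.7-0.8 range cited),
# the filter discounts old evidence at roughly the DBM's assumed change rate.
trace = exp_filter([1, 1, 0, 1, 0, 0], eta=0.3)
```

Because the update is a single multiply-add per observation, this is the O(m) alternative to discretized DBM inference, and nothing prevents eta from being varied trial to trial, as the quote notes.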