Same same but different? Modeling N-1 switch cost and N-2 repetition cost with the diffusion model and the linear ballistic accumulator model (2019)
DOI: 10.1016/j.actpsy.2019.05.010

Cited by 5 publications (7 citation statements) · References 55 publications
“…However, we note that since gate opening and closing both (necessarily) involve switching between trial types, these measures may include effects related to task switching, such as task-set reconfiguration and proactive interference from previously active sets (we note that our updating and comparison measures were based only on no-switch trials and so were not contaminated by potential switching effects [4,60,61]). This would go some way toward explaining why both gating measures (and switching) were associated with lower drift rates, since interference from previously active sets and noise in the retrieved set would be expected to reduce the quality of decision processing on switch trials [62]. We also note that the larger costs associated with closing versus opening the gate (i.e., when switching to maintenance mode) provide further support for the PBWM's assumption that WM sits in maintenance (gate-closed) mode by default.…”
Section: Discussion (mentioning; confidence: 99%)
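The quoted argument, that interference from previously active task sets lowers drift rates and thereby degrades decision processing on switch trials, can be made concrete with a simulation. The sketch below is a minimal Euler-Maruyama simulation of a two-boundary diffusion process; all parameter values (the drift rates, boundary separation, noise, and non-decision time) are illustrative assumptions, not estimates from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(v, a=1.0, t0=0.3, sigma=1.0, dt=0.001, max_t=3.0):
    """Simulate one diffusion trial: evidence starts midway between the
    boundaries 0 and a and drifts at rate v plus Gaussian noise until a
    boundary is hit. Returns (response time, hit upper boundary?)."""
    x, t = a / 2.0, 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t0 + t, x >= a

# Hypothetical drift rates: lower on switch trials, standing in for
# interference from the previously active task set.
drifts = {"repeat": 2.0, "switch": 1.2}
for label, v in drifts.items():
    rts = [simulate_ddm(v)[0] for _ in range(500)]
    print(label, "mean RT:", round(float(np.mean(rts)), 3))
```

With these toy settings, mean RTs come out slower (and accuracy lower) on "switch" trials, matching the direction of the drift-rate account in the quote.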
“…Future work may thus profit from integrating a PBWM-like learning mechanism into our evidence accumulation framework to obtain finer control over the temporal dynamics of reference-back performance (e.g., by having threshold and/or reactive control settings vary from trial to trial as a function of learning; see [65,66] for examples of such an approach in the domain of instrumental learning). In addition, we speculate that some of the minor misfits (e.g., to empirical switching and comparison costs) of our model were likely due to certain sequential or 'carry-over' effects that are unaccounted for in the current framework, such as proactive interference, priming, task-set inertia/reconfiguration, and Gratton effects arising from previously encountered stimuli and responses [11,42,43,62,67-72]. Due to our limited number of trials per subject, we were unable to conduct a thorough model-based analysis of sequential effects in the reference-back task.…”
Section: Discussion (mentioning; confidence: 99%)
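The suggestion to let threshold settings vary from trial to trial as a function of learning could look something like the following sketch, which couples the simulate_ddm helper from the previous block to a simple delta rule. The update rule, learning rate, and bounds are hypothetical placeholders for whatever PBWM-like mechanism one would actually fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reuses simulate_ddm from the previous sketch. A delta rule nudges the
# boundary separation up after errors (more caution) and down after
# correct responses (less caution); learning rate and bounds are
# illustrative placeholders, not a fitted mechanism.
a, alpha = 1.0, 0.05           # starting threshold, learning rate
a_min, a_max = 0.5, 2.0        # plausible caution range (assumed)

history = []
for trial in range(200):
    _, correct = simulate_ddm(v=1.5, a=a)
    target = a_min if correct else a_max
    a = float(np.clip(a + alpha * (target - a), a_min, a_max))
    history.append(a)
print("final threshold:", round(history[-1], 3))
```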
“…To discuss the implications of these concerns, it is important to distinguish between studies that employ mean performance measures of cognitive control (i.e., performance in a specific condition or averaged across different conditions in an experimental task) and studies that employ difference score measures of cognitive control (i.e., difference scores or slope estimates between at least two experimental conditions). Both mean performance and difference score measures are typically calculated based on the average number of correct responses and/or averaged response times (RTs) across experimental trials (Draheim, Mashburn, Martin, & Engle, 2019), but they can in principle also be calculated on the basis of other performance measures such as model-based estimates of information-processing (Frischkorn & Schubert, 2018; Hartmann, Rey-Mermet, & Gade, 2019).…”
Section: Problems in the Measurement of Cognitive Control (mentioning; confidence: 99%)
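The distinction drawn here between mean performance measures and difference score measures is easy to state in code. Below is a minimal sketch computing both from a hypothetical per-trial table; the condition coding ('repeat', 'cba', 'aba') and the RT values are invented for illustration.

```python
import pandas as pd

# Hypothetical per-trial data: 'repeat' = N-1 task repetition,
# 'cba' and 'aba' = switch trials, with 'aba' marking N-2 repetitions
# (task sequence A-B-A).
trials = pd.DataFrame({
    "rt":        [620, 655, 840, 810, 905, 930, 600, 835],
    "correct":   [1,   1,   1,   1,   1,   0,   1,   1],
    "condition": ["repeat", "repeat", "cba", "cba",
                  "aba", "aba", "repeat", "aba"],
})

ok = trials[trials["correct"] == 1]        # RTs from correct trials only
m = ok.groupby("condition")["rt"].mean()

mean_performance = ok["rt"].mean()                        # mean measure
n1_switch_cost = m[["aba", "cba"]].mean() - m["repeat"]   # switch - repeat
n2_repetition_cost = m["aba"] - m["cba"]                  # ABA - CBA
print(round(mean_performance, 1), round(n1_switch_cost, 1),
      round(n2_repetition_cost, 1))
```

In practice one would of course use many more trials per condition and apply the usual RT preprocessing (outlier trimming, exclusion of post-error trials, etc.) before forming either measure.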
“…calculated based on the average number of correct responses and/or averaged response times (RTs) across experimental trials (Draheim, Mashburn, Martin, & Engle, 2019), but they can in principle also be calculated on the basis of other performance measures such as model-based estimates of information-processing (Frischkorn & Schubert, 2018; Hartmann, Rey-Mermet, & Gade, 2019). Mean performance measures of cognitive control typically show high reliabilities but questionable validities.…”
Section: Limitations of Behavioral Measures of Cognitive Control (mentioning; confidence: 99%)
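The claim that mean performance measures are more reliable than the difference scores built from them can be illustrated with simulated split-half reliabilities. In the hypothetical sketch below, a shared "general speed" true score dominates both condition means but cancels out of the difference; all variances and sample sizes are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200
ability = rng.normal(700, 100, n)     # general speed (shared true score)
cost = rng.normal(150, 30, n)         # each person's true switch cost

def half(mu):
    """One split-half estimate of a condition mean: truth + noise."""
    return mu + rng.normal(0, 60, n)

rep1, rep2 = half(ability), half(ability)              # repeat trials
sw1, sw2 = half(ability + cost), half(ability + cost)  # switch trials

def split_half_r(x1, x2):
    r = np.corrcoef(x1, x2)[0, 1]
    return 2 * r / (1 + r)            # Spearman-Brown correction

print("mean performance:", round(split_half_r(rep1, rep2), 2))          # high
print("switch cost:", round(split_half_r(sw1 - rep1, sw2 - rep2), 2))   # low
```

The mechanism is just variance bookkeeping: subtracting the two condition means removes the large shared true-score variance but retains both conditions' measurement noise, so the difference score's reliability drops even though each ingredient is reliable.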
“…However, to infer stronger causal relationships between variables under study, future research is strongly recommended to employ experimental and longitudinal designs. Furthermore, we assessed n − 1 switch and n − 2 repetition costs in a single task-switching session, which may reduce the involvement of inhibitory processes (Hartmann et al., 2019). Moreover, our study comprised a small convenience sample of university students.…”
Section: Discussion (mentioning; confidence: 99%)