2017
DOI: 10.1016/j.jml.2017.04.003

Diffusion vs. linear ballistic accumulation: Different models, different conclusions about the slope of the zROC in recognition memory

Cited by 52 publications (61 citation statements: 2 supporting, 59 mentioning, 0 contrasting)
References 81 publications
“…Cross-trial variability in drift rate is analogous to variability in memory strength in SDT and enables the model to produce errors that are slower than correct responses, while variability in starting point enables the model to produce errors that are faster than correct responses. In recognition memory, errors are typically slower than correct responses, and a fast-error pattern has not usually been found (Ratcliff & Smith, 2004); prior work therefore suggests that starting-point variability is unnecessary in recognition memory (Osth, Bora, et al., 2017). Thus, in our applications we fix s_z = 0 in all cases.…”
Section: Relating Global Semantic Similarity to Recognition Performance (mentioning)
confidence: 99%
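
The contrast this statement describes is easy to check by simulation. Below is a minimal sketch, not code from the cited paper or from Osth, Bora, et al. (2017): it simulates a two-boundary Wiener diffusion process with either across-trial drift-rate variability (eta) or across-trial starting-point variability (s_z); all parameter values are arbitrary assumptions chosen for illustration. With eta > 0 and s_z = 0 the simulated errors come out slower than correct responses, whereas s_z > 0 with eta = 0 yields fast errors, which is the pattern that motivates fixing s_z = 0 for recognition memory.

```python
import numpy as np

# Illustrative sketch (not the authors' code). Parameter names follow common
# diffusion-model notation: boundary separation a, start point z, drift v,
# across-trial drift SD eta, across-trial start-point range s_z.
rng = np.random.default_rng(1)

def simulate(v_mean, eta, s_z, a=1.0, z=0.5, sigma=1.0, dt=0.001, n=5000):
    """Simulate n two-boundary diffusion trials; return mean correct and error RTs."""
    correct_rts, error_rts = [], []
    for _ in range(n):
        # Across-trial variability: drift drawn from N(v_mean, eta); starting
        # point drawn uniformly from a window of width s_z centered on z.
        v = rng.normal(v_mean, eta) if eta > 0 else v_mean
        x = z + (rng.uniform(-s_z / 2, s_z / 2) if s_z > 0 else 0.0)
        t = 0.0
        while 0.0 < x < a:  # accumulate evidence until a boundary is crossed
            x += v * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        (correct_rts if x >= a else error_rts).append(t)
    return np.mean(correct_rts), np.mean(error_rts)

# Drift variability only (s_z = 0): errors come mostly from low-drift trials,
# which are slow, so mean error RT exceeds mean correct RT.
c, e = simulate(v_mean=1.0, eta=1.2, s_z=0.0)
print(f"eta > 0, s_z = 0: correct {c:.3f} s, error {e:.3f} s (slow errors)")

# Starting-point variability only (eta = 0): errors come mostly from trials
# starting near the wrong boundary, which terminate quickly (fast errors).
c, e = simulate(v_mean=1.0, eta=0.0, s_z=0.7)
print(f"eta = 0, s_z > 0: correct {c:.3f} s, error {e:.3f} s (fast errors)")
```

The sketch uses a simple Euler discretization with step dt; a finer step slows the simulation but barely changes the mean RTs, and the qualitative slow-error/fast-error contrast is robust to the exact parameter values assumed here.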
“…A summary of the datasets can be seen in Table 2. In addition to spanning a range of study list lengths, the experiments also contain manipulations such as the number of presentations (Criss, 2010), encoding tasks (Kiliç, Criss, Malmberg, & Shiffrin, 2017), and speed-accuracy emphasis (Rae et al., 2014; Osth, Bora, et al., 2017). Within-list manipulations (either within study or test lists) mix levels within a list, such as having high-frequency (HF) and low-frequency (LF) words randomly ordered within both the study and test lists.…”
Section: The Model Fit (mentioning)
confidence: 99%