2020
DOI: 10.1037/xhp0000856

Is zjudge a better prime for JUDGE than zudge is?: A new evaluation of current orthographic coding models.

Abstract: Three masked priming paradigms, the conventional masked priming lexical decision task (Forster & Davis, 1984), the sandwich priming task (Lupker & Davis, 2009), and the masked priming same-different task (Norris & Kinoshita, 2008), were used to investigate priming for a given target (e.g., JUDGE) from primes created by either adding a letter to the beginning of the target (e.g., zjudge) or replacing the target's initial letter (e.g., zudge). Virtually all models of orthographic coding that allow calculation of o…

Cited by 7 publications (4 citation statements); references 65 publications (164 reference statements).
“…Previous studies on transpositions and substitutions have mainly focused on how preactivation from consistent primes affected target processing. However, there is accumulating evidence that inhibition from inconsistent information is equally important and should be given more attention in models of orthographic processing (e.g., Lupker et al., 2020). Clearly, inhibition dynamics might also be especially important to explain changes in priming effects throughout development (Kezilas et al., 2017).…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…Because convergence failures for GLMMs are frequent in the current version of lme4 (the R package used for running the GLMMs), although many of those failures reflect false positives (Bolker, 2022), we kept the random structure of the model as simple as possible to limit the occurrence of failures, using only random intercepts for subjects and items. For the same reason, the model was run with the maximum number of iterations increased to 1 million and with the BOBYQA optimizer, an optimizer that typically returns estimates equivalent to those of lme4's default optimizer but produces fewer convergence failures (see, e.g., Colombo et al., 2020; Lupker et al., 2020a, 2020b). Prior to running the model, R-default treatment contrasts were changed to sum-to-zero contrasts (i.e., contr.sum) to help interpret lower-order effects in the presence of higher-order interactions (Singmann & Kellen, 2019).…”
Section: Results (citation type: mentioning; confidence: 99%)
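For readers who want to see what these settings look like in lme4 syntax, the following is a minimal, hypothetical sketch. The data frame `dat` and its columns (`RT`, `prime_type`, `subject`, `item`) are illustrative, and the Gamma identity-link family is an assumption, since the quoted passage does not name the response distribution.

```r
# Minimal sketch of the model settings described above (hypothetical
# data frame `dat` with columns RT, prime_type, subject, item).
library(lme4)

# Replace R's default treatment contrasts with sum-to-zero contrasts
# so lower-order effects stay interpretable alongside interactions.
options(contrasts = c("contr.sum", "contr.poly"))

m <- glmer(
  RT ~ prime_type +
    (1 | subject) + (1 | item),        # random intercepts only
  data    = dat,
  family  = Gamma(link = "identity"),  # assumed family; not stated in the quote
  control = glmerControl(
    optimizer = "bobyqa",              # BOBYQA optimizer
    optCtrl   = list(maxfun = 1e6)     # allow up to 1 million iterations
  )
)
summary(m)
```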
“…The data were analysed using generalised linear mixed-effects models (GLMMs) in R version 4.1.1 (R Core Team, 2021) with the lme4 package, version 1.1.27.1 (Bates et al., 2015). GLMMs were used because they do not require the assumption of normally distributed RTs, thus making it possible to analyse untransformed RT data (Lo & Andrews, 2015; Lupker et al., 2020).…”
Section: Results (citation type: mentioning; confidence: 99%)
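To make that contrast concrete, here is a short, hypothetical sketch of the two approaches: a classic linear mixed model on transformed RTs versus a GLMM on raw RTs using a Gamma family with an identity link, one of the options Lo and Andrews (2015) discuss. Variable names are illustrative, not taken from the cited papers.

```r
# Hypothetical comparison (data frame `dat` with columns RT, prime_type,
# subject, item); names are illustrative.
library(lme4)

# Classic LMM: RTs must first be transformed toward normality.
m_log <- lmer(log(RT) ~ prime_type + (1 | subject) + (1 | item), data = dat)

# GLMM on untransformed RTs: a Gamma family with an identity link keeps
# the fixed effects on the original millisecond scale.
m_raw <- glmer(RT ~ prime_type + (1 | subject) + (1 | item),
               data = dat, family = Gamma(link = "identity"))
```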