2023
DOI: 10.1016/j.cogpsych.2023.101598
How trial-to-trial learning shapes mappings in the mental lexicon: Modelling lexical decision with linear discriminative learning

Maria Heitmeier, Yu-Ying Chuang, R. Harald Baayen

Cited by 10 publications (10 citation statements)
References 155 publications
“…As fixed effects we entered Group (two levels: [−unlearning], [+unlearning]), Category (two levels: diminutive, plural), and Sequence (the trial order during the test), as well as their interactions (pairwise and three-way). We included Sequence because test trial order has been shown to affect learning (Heitmeier, Chuang, & Baayen, 2023). A change in accuracy in later trials may reflect either participants' change in confidence or a reevaluation of their cue-outcome mappings.…”
Section: Results (mentioning; confidence: 99%)
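The design described in this excerpt is straightforward to set up. A minimal sketch in Python, assuming simulated data, invented column names, and a plain logistic regression (the original analysis may well have used mixed or additive models with random effects):

```python
# Hedged sketch: fit accuracy ~ Group * Category * Sequence, which expands
# to all main effects plus the pairwise and three-way interactions quoted
# above. Data, column names, and the model family are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "accuracy": rng.integers(0, 2, n),                    # 0/1 response
    "Group": rng.choice(["minus_unlearning", "plus_unlearning"], n),
    "Category": rng.choice(["diminutive", "plural"], n),
    "Sequence": np.tile(np.arange(1, n // 4 + 1), 4),     # test trial order
})

model = smf.logit("accuracy ~ Group * Category * Sequence", data=df).fit()
print(model.summary())
```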
“…While semantic outcomes are often discrete (Kapatsinski, 2023b), recent studies have moved away from discrete representations. Nieder et al. (2023), for example, use continuous cue-outcome representations to model Maltese inflection, while Heitmeier et al. (2023) successfully modeled trial-by-trial effects of a lexical decision experiment in the same way. These studies use one-hot encoded vectors to represent phonology, and word embeddings to represent meaning, replacing the Rescorla-Wagner equations with the similar but computationally more powerful Widrow-Hoff delta rule (Widrow & Hoff, 1960).…”
Section: Implications for Natural Language Learning (mentioning; confidence: 99%)
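The Widrow-Hoff delta rule contrasted here with Rescorla-Wagner is compact enough to show directly. A minimal sketch, assuming toy dimensions, a one-hot cue, and a random vector standing in for a word embedding; it illustrates only the update rule, not the cited models:

```python
# Widrow-Hoff (delta rule) with continuous outcomes: a weight matrix W maps
# a one-hot cue vector to a continuous outcome vector and is nudged toward
# the target after every learning event. All sizes and data are toy values.
import numpy as np

rng = np.random.default_rng(1)
n_cues, n_dims = 5, 3            # cue inventory size, embedding dimension
W = np.zeros((n_cues, n_dims))   # cue-to-outcome weights, start at zero
eta = 0.1                        # learning rate

def widrow_hoff_update(W, cue, outcome, eta):
    """One trial: predict, take the prediction error, update the weights."""
    error = outcome - cue @ W                 # target minus prediction
    return W + eta * np.outer(cue, error)

target = rng.normal(size=n_dims)  # stand-in for a word embedding
cue = np.eye(n_cues)[2]           # one-hot encoded cue (e.g., one phone)
for _ in range(100):
    W = widrow_hoff_update(W, cue, target, eta)

print(np.allclose(cue @ W, target, atol=0.01))  # True: prediction converged
```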
“…In first language acquisition, Nixon and Tomaschek (2020, 2021) present a computational model of early infants' learning of speech, using the incoming acoustic signal to predict the upcoming acoustic signal. Apart from phonetic learning, error-driven learning has also been found to play a role in word learning (Ramscar et al., 2013a; Ramscar et al., 2010; Ramscar et al., 2011), morphological learning (Hoppe et al., 2020; Ramscar & Yarlett, 2007; Ramscar et al., 2013b; Tomaschek et al., 2019), lexical decision (Heitmeier, Chuang, & Baayen, 2023), speech production (Tucker et al., 2019; Tomaschek et al., 2019; Tomaschek & Ramscar, 2022), and speech perception (Arnold et al., 2017; Shafaei-Bajestan & Baayen, 2018). There is also evidence for trial-by-trial error-driven learning in the brain (Lentz et al., 2021).…”
Section: Discussion (mentioning; confidence: 99%)
“…The initial stage of speech production is modeled as involving a mapping in the opposite direction, starting with a high-dimensional semantic vector (known as embeddings in computational linguistics) and targeting a vector specifying which phone combinations drive articulation. The DLM has been successful in modeling a range of different morphological systems (e.g., Chuang et al., 2020, 2022; Denistia and Baayen, 2021; Heitmeier et al., 2021; Nieder et al., 2023) as well as behavioral data such as acoustic durations (Schmitz et al., 2021; Stein and Plag, 2021; Chuang et al., 2022), (primed) lexical decision reaction times (Gahl and Baayen, 2023; Heitmeier et al., 2023b), and data from patients with aphasia (Heitmeier and Baayen, 2020).…”
Section: Introduction (mentioning; confidence: 99%)
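The two linear mappings sketched in this excerpt (form to meaning and meaning to form) can be estimated directly with least squares when learning is modeled at its endstate. A minimal sketch with random toy matrices standing in for the form and semantic representations (the actual models use phone-based cues and corpus-derived embeddings):

```python
# Hedged sketch of the DLM's two linear mappings: a comprehension matrix F
# with C @ F ≈ S and a production matrix G with S @ G ≈ C, estimated by
# least squares via the pseudoinverse. C and S here are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_words, n_form, n_sem = 20, 12, 8
C = rng.integers(0, 2, (n_words, n_form)).astype(float)  # form (cue) matrix
S = rng.normal(size=(n_words, n_sem))                    # semantic matrix

F = np.linalg.pinv(C) @ S   # comprehension: form vectors -> semantic vectors
G = np.linalg.pinv(S) @ C   # production: semantic vectors -> form vectors

print((C @ F).shape, (S @ G).shape)  # (20, 8) (20, 12)
```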
“…Recent modeling efforts with the DLM have been limited by the disadvantages of EL and WHL: they either had to opt for EL, which resulted in models that were not informed about word frequencies (e.g., Heitmeier et al., 2023b), or for WHL, which limited the amount of data the models could be trained on (e.g., Chuang et al., 2021; Heitmeier et al., 2021). The present paper aims to solve this problem by introducing a new method for computing the mapping matrices that takes frequency of use into account yet remains computationally efficient by relying on a numerically efficient solution: “Frequency-informed learning” (FIL).…”
Section: Introduction (mentioning; confidence: 99%)
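One natural way to make such a least-squares mapping frequency-informed while staying efficient is to weight each word's row by its token frequency, i.e., to solve a weighted least-squares problem. The sketch below illustrates that idea with assumed toy data; whether this matches FIL exactly is something to check against the paper itself.

```python
# Hedged sketch of a frequency-weighted least-squares mapping: scaling each
# row of C and S by sqrt(frequency) makes frequent words dominate the
# solution, yielding a frequency-informed F in one efficient solve.
# The data are toy values; the precise FIL formulation is in the paper.
import numpy as np

rng = np.random.default_rng(3)
n_words, n_form, n_sem = 20, 12, 8
C = rng.integers(0, 2, (n_words, n_form)).astype(float)  # form matrix
S = rng.normal(size=(n_words, n_sem))                    # semantic matrix
freq = rng.integers(1, 1000, n_words).astype(float)      # token frequencies

w = np.sqrt(freq)[:, None]                  # per-word row weights
F_fil, *_ = np.linalg.lstsq(w * C, w * S, rcond=None)

print(F_fil.shape)  # (12, 8): frequency-informed comprehension mapping
```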