2022
DOI: 10.15398/jlm.v10i1.274

Learning Reduplication with a Neural Network that Lacks Explicit Variables

Abstract: Reduplicative linguistic patterns have been used as evidence for explicit algebraic variables in models of cognition. Here, we show that a variable-free neural network can model these patterns in a way that predicts observed human behavior. Specifically, we successfully simulate the three experiments presented by Marcus et al. (1999), as well as Endress et al.’s (2007) partial replication of one of those experiments. We then explore the model’s ability to generalize reduplicative mappings to different kinds o…
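As a rough illustration of the kind of stimuli the abstract refers to, the following minimal Python sketch builds three-syllable strings on ABA- and ABB-style templates of the sort used in Marcus et al.'s (1999) familiarization phase. The syllable inventory and item construction here are assumptions for demonstration only, not the authors' materials.

import itertools

SYLLABLES = ["ga", "ti", "na", "li"]  # hypothetical inventory, not the original stimulus set

def make_items(pattern):
    # Build three-syllable strings following an ABA or ABB template.
    items = []
    for a, b in itertools.permutations(SYLLABLES, 2):
        items.append((a, b, a) if pattern == "ABA" else (a, b, b))
    return items

print(make_items("ABA")[0])  # ('ga', 'ti', 'ga')
print(make_items("ABB")[0])  # ('ga', 'ti', 'ti')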

Citations: cited by 3 publications (6 citation statements)
References: 8 publications
“…The results of our pretraining analysis underscored the substantial impact of prior knowledge, as models pretrained on syllables exhibited remarkable performance improvements, demonstrating that pretraining not only improves training accuracy but also enables models to excel on novel data. This finding resonates with prior research highlighting the influence of prior knowledge in the context of generative rule learning (Seidenberg & Elman, 1999a, b; Altmann, 2002; Geiger et al., 2022; Prickett et al., 2022) and offers valuable insights into the learning dynamics of neural network models. These insights can potentially be extended to the understanding of early language acquisition in infants.…”
Section: Discussion (supporting)
confidence: 88%
“…Subsequent attempts to model Marcus et al.’s (1999) human data using variable-free network models have met with varying degrees of success. This work has shown that model performance is influenced by various factors, including pretraining (whether the model has any prior knowledge about phonemes, syllables or any abstract relations that will help the model to figure out the task at hand) (Seidenberg & Elman, 1999a, b; Altmann, 2002), encoding assumptions (whether the model is trained on input vectors that represent phonetic features, place of articulation, vowel height, primary/secondary stress or non-featural random vectors) (Negishi, 1999; Christiansen & Curtin, 1999; Christiansen, Conway, & Curtin, 2000; Dienes, Altmann, & Gao, 1999; Altmann & Dienes, 1999; Shultz & Bale, 2001; Geiger et al., 2022), model type (whether the model is a neural network, autoencoder trained with cascade-correlation, auto-associator, Bayesian model, Echo State Network or Seq2Seq) (Shultz, 1999; Sirois, Buckingham, & Shultz, 2000; Frank and Tenenbaum, 2011; Alhama and Zuidema, 2018; Prickett et al., 2022), and task (whether the task is to predict or identify rules, words, syllables, or patterns, or to segment syllable sequences into “words”) (Seidenberg & Elman, 1999a, 1999b; Christiansen & Curtin, 1999) (see Alhama and Zuidema (2019) for a detailed review of the computational models). These factors have made it challenging to draw direct comparisons with human behavior, further fueling the ongoing discussion.…”
Section: Generativity of Humans and Computational Models (mentioning)
confidence: 99%
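To make the "encoding assumptions" contrast in the excerpt above concrete, here is a minimal Python sketch of the difference between a featural input vector and an arbitrary non-featural one for the same syllable. The feature names and values are invented for illustration and do not reproduce any of the cited models' actual encodings.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary feature encoding; columns are illustrative:
# [voiced, labial, coronal, dorsal, high-vowel]
FEATURES = {
    "ga": [1, 0, 0, 1, 0],
    "ti": [0, 0, 1, 0, 1],
    "wo": [1, 1, 0, 0, 0],
}

def featural(syllable):
    # Featurally related syllables share vector components, giving a
    # network structure to generalize over.
    return np.array(FEATURES[syllable], dtype=float)

_random_codes = {}
def arbitrary(syllable, dim=5):
    # Non-featural random encoding: a novel syllable shares nothing with
    # familiar ones, so rule-like generalization is harder to carry over.
    if syllable not in _random_codes:
        _random_codes[syllable] = rng.normal(size=dim)
    return _random_codes[syllable]

print(featural("ga"), arbitrary("ga"))

On this view, the choice of encoding is itself a modeling assumption that partly determines whether variable-free generalization to novel items is even possible.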
“…Formalizing theories of reduplication raises the issue of how theories represent the idea of not only copying segments, but also the sequencing of repeated segments within a reduplicative construction. These issues bear on the role of sequencing and repetition in cognition (Endress et al. 2007; Buzsáki & Tingley 2018; Moreton et al. 2021), as well as how machine learning methods infer these constructions (Dolatian & Heinz 2018a; Beguš 2021; Nelson et al. 2020; Prickett et al. 2022).…”
Section: Introduction (mentioning)
confidence: 99%
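As a concrete illustration of the copying-and-sequencing issue raised in the excerpt above, the following minimal Python sketch (my own illustration, not drawn from the cited work) represents total reduplication and one simple partial pattern as copying functions over segment sequences; the partial pattern shown is invented for demonstration.

def total_reduplication(segments):
    # Total reduplication: copy the whole base, e.g. bak -> bakbak.
    return segments + segments

def initial_cv_reduplication(segments, vowels="aeiou"):
    # A simple partial pattern: copy the onset plus the first vowel, then
    # attach the base, e.g. bak -> babak. The pattern is illustrative only.
    prefix = []
    for seg in segments:
        prefix.append(seg)
        if seg in vowels:
            break
    return prefix + segments

print("".join(total_reduplication(list("bak"))))       # bakbak
print("".join(initial_cv_reduplication(list("bak"))))  # babak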