2022
DOI: 10.3389/fpsyg.2022.741321
Can Recurrent Neural Networks Validate Usage-Based Theories of Grammar Acquisition?

Abstract: It has been shown that Recurrent Artificial Neural Networks automatically acquire some grammatical knowledge in the course of performing linguistic prediction tasks. The extent to which such networks can actually learn grammar is still an object of investigation. However, being mostly data-driven, they provide a natural testbed for usage-based theories of language acquisition. This mini-review gives an overview of the state of the field, focusing on the influence of the theoretical framework in the interpretation…
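To make the abstract's notion of a "linguistic prediction task" concrete, here is a toy next-word prediction setup. This is a sketch under our own assumptions, not anything from the paper: a bigram count model stands in for a trained RNN, and the corpus and function names are hypothetical.

```python
# Toy next-word prediction task: the model sees a word and is scored on
# predicting what comes next. Bigram counts stand in for an RNN's
# learned distribution (illustrative assumption only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat slept on the mat".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (ties broken by insertion order)
```

An RNN trained on the same objective would replace the count table with a learned hidden state, but the task it is evaluated on is the same.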

Cited by 4 publications (3 citation statements); references 43 publications.
“…From a usage-based viewpoint, Diessel (2013b) stresses the importance of deictic pointing and joint attention as (extralinguistic) language acquisition factors. The non-improvements added by the curriculum approach also further add to the debate on what language models mean for linguistic theory. For example, Pannitto and Herbelot (2022) and Piantadosi (2023) have stressed the anti-Chomskyan evidence provided by the successes of language models. Curriculum learning looks like an obvious choice when trying to implement usage-based findings in the training process for (smaller) language models.…”
Section: Discussion
confidence: 99%
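The curriculum-learning idea raised in the excerpt above can be illustrated with a short sketch. This is our own minimal example, not the cited study's actual setup: sentence length serves as an assumed proxy for complexity, and train_step is a hypothetical training call.

```python
# Minimal curriculum-learning sketch: present training sentences in order
# of increasing complexity. Word count is an assumed complexity measure;
# real usage-based curricula may use richer ones.
corpus = [
    "the dog barked at the mail carrier",
    "dogs bark",
    "the dog chased the ball",
]

def curriculum_order(sentences):
    """Sort examples from simple to complex (here: by word count)."""
    return sorted(sentences, key=lambda s: len(s.split()))

for stage, sentence in enumerate(curriculum_order(corpus), start=1):
    # train_step(model, sentence)  # hypothetical call to a trainer
    print(stage, sentence)
```

The design choice is only about data ordering: the model and objective stay fixed, and the curriculum decides which examples the learner sees first.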
“…RNNs are more effective at learning the nonlinear characteristics of sequences because they share parameters, have memory, and are Turing complete. RNNs were first used to describe the relationship between a sequence's current output and its past information [25, 26].…”
Section: Models and Evaluation Methods
confidence: 99%
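The two properties the excerpt names, shared parameters and memory of past inputs, can both be seen in a minimal Elman-style RNN cell. The sketch below is illustrative only (sizes and initialization are arbitrary assumptions), not one of the reviewed models.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 8, 16  # arbitrary illustrative sizes

# One set of weights, reused at every time step (parameter sharing).
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(xs):
    """Run the same cell over a sequence; the hidden state h carries
    information about all past inputs forward (the 'memory')."""
    h = np.zeros(hidden_size)
    states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # same W_xh, W_hh each step
        states.append(h)
    return states

sequence = [rng.normal(size=input_size) for _ in range(5)]  # toy inputs
final_state = rnn_forward(sequence)[-1]
print(final_state.shape)  # (16,): a summary of the whole prefix
```

Because the final hidden state depends on the entire prefix, the same small set of weights can, in principle, capture dependencies at any distance, which fixed-window models cannot.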
“…On the other hand, deriving the linguistic structure from the input structure itself requires investigating the two aspects together. So far, most studies on NLMs have disregarded the effect of the input on experimental results (Pannitto and Herbelot, 2022).…”
Section: Nativist vs. Non-nativist Approaches to Language Acquisition
confidence: 99%