2010
DOI: 10.1007/978-3-642-12116-6_8

Computational Models of Language Acquisition

Abstract: Child language acquisition, one of Nature's most fascinating phenomena, is to a large extent still a puzzle. Experimental evidence seems to support the view that early language is highly formulaic, consisting for the most part of frozen items with limited productivity. Fairly quickly, however, children find patterns in the ambient language and generalize them to larger structures, in a process that is not yet well understood. Computational models of language acquisition can shed interesting light on …

Cited by 31 publications (4 citation statements). References 46 publications.
“…Firstly, eye tracking and brain activity data (captured by functional magnetic resonance imaging, fMRI, and electroencephalography, EEG) proved useful for a wide range of tasks such as sentiment analysis (Gu et al, 2014; Mishra et al, 2018), relation extraction (McGuire & Tomuro, 2021), named entity recognition, and text simplification (Klerke et al, 2016). Secondly, cognitive theories of text comprehension and production have guided model design for grammar induction and constituency parsing (Levy et al, 2008; Wintner, 2010), machine translation (Saini & Sahula, 2021), common-sense reasoning (Sap et al, 2020), and training strategies involving regularization (Wei et al, 2021) and curriculum learning (Xu et al, 2020). Moreover, these theories played key roles in understanding catastrophic forgetting during fine-tuning of neural networks (Arora, Rahimi, & Baldwin, 2019), better analysis of what neural networks comprehend (Ettinger, 2020; Dunietz et al, 2020), and better evaluation of generated text (van Der Lee et al, 2019)…”
Section: Cognitive Models for NLP Tasks (mentioning, confidence: 99%)
“…The language learning process presented in this paper differs from many of the existing models [37,38] in several respects. The first is that it assumes no a priori knowledge and no external 'guidance'.…”
Section: Language, Thought and Meaning (mentioning, confidence: 99%)
“…Given the increasing number of modeling studies being conducted nowadays, it becomes more and more important to be able to assess their replicability, a goal to which this paper makes a humble contribution. Finally, although related works have appeared before and many have appeared since, Redington et al.'s (1998) study is, to our knowledge (Frank 2011; Kaplan, Oudeyer & Bergen 2008; Seidenberg 1997; Wintner 2010; Yang 2012) and in this specific subject, the first and most comprehensive computational study on the distributional properties of child-directed speech and how it relates to language acquisition. In this regard, this model is a computational cognitive model.…”
Section: Introduction (mentioning, confidence: 96%)