2018
DOI: 10.1162/tacl_a_00247

Recurrent Neural Networks in Linguistic Theory: Revisiting Pinker and Prince (1988) and the Past Tense Debate

Abstract: Can advances in NLP help advance cognitive modeling? We examine the role of artificial neural networks, the current state of the art in many common NLP tasks, by returning to a classic case study. In 1986, Rumelhart and McClelland famously introduced a neural architecture that learned to transduce English verb stems to their past tense forms. Shortly thereafter, Pinker and Prince (1988) presented a comprehensive rebuttal of many of Rumelhart and McClelland's claims. Much of the force of their attack centered o…

Cited by 58 publications (67 citation statements)
References 42 publications
“…11 These findings alone are enough to rule out accounts that posit a regular rule (or regular schema), but what should we put in its place? There is no shortage of options: the literature boasts three broad categories of exemplar model, at least 13 connectionist (neural network) models (see Kirov & Cotterell, 2018 for a summary) and the multiple-rules model of Albright and Hayes (2003). 12 Because exemplar models originate in the cognitive psychology literature on categorization (e.g.…”
Section: Morphologically Inflected Words
confidence: 99%
“…The results above at least indicate that Japanese verbs can be computed online via some generalizations, and those generalizations do depend on the direction of morphological inflection, contrary to the conclusion of previous "wug" tests that Japanese verbs are merely stored in the mental lexicon (de Chene, 1982; Vance, 1987, 1991; Klafehn, 2003, 2013). However, although the MGL is "rule-based", the nature of those generalizations is still an open question, to be addressed via systematic comparison with contemporary analogy-based models such as Recurrent Neural Networks (RNN: Kirov and Cotterell, 2018) and Naive Discriminative Learning (NDL: Baayen et al., 2011) couched in Word and Paradigm models of morphology (Stump, 2001; Blevins, 2006).…”
Section: The Past Tense Debate
confidence: 99%
“…RNNs have thus long attracted researchers interested in language acquisition and processing. Their recent successes in large-scale tasks have rekindled this interest (e.g., Frank et al., 2013; Lau et al., 2017; Kirov and Cotterell, 2018; McCoy et al., 2018; Pater, 2018).…”
Section: Introduction
confidence: 99%