2014
DOI: 10.1111/lang.12049

A Role for Chunk Formation in Statistical Learning of Second Language Syntax

Abstract: Humans are remarkably sensitive to the statistical structure of language. However, different mechanisms have been proposed to account for such statistical sensitivities. The present study compared adult learning of syntax and the ability of two models of statistical learning to simulate human performance: Simple Recurrent Networks, which learn by predictive computation, and PARSER, which learns chunks as a byproduct of general principles of associative learning and memory. In the first stage, a semiartificial …
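The abstract contrasts a prediction-based learner (the Simple Recurrent Network) with PARSER, a chunking model (Perruchet & Vinter, 1998). As a rough illustration of how chunks can emerge from associative learning and forgetting alone, here is a minimal Python sketch of a PARSER-style loop. The parameter values, the one-to-three-unit attentional window, and the toy syllable stream are illustrative assumptions, not the settings or materials used in the paper, and chunk interference is omitted for brevity:

```python
import random

# Illustrative parameters; assumed values, not the paper's fitted settings.
GAIN = 1.0        # strength added to a chunk each time it is perceived
DECAY = 0.05      # forgetting applied to every stored chunk per step
THRESHOLD = 1.0   # strength at which a chunk can act as a perceptual unit

def strong_unit_at(stream, pos, lexicon):
    """Longest above-threshold chunk matching the input at pos, if any."""
    matches = [c for c, w in lexicon.items()
               if w >= THRESHOLD and tuple(stream[pos:pos + len(c)]) == c]
    return max(matches, key=len) if matches else None

def parser_run(stream, steps=4000):
    lexicon = {}                        # chunk (tuple of syllables) -> strength
    pos = 0
    for _ in range(steps):
        units = []
        for _ in range(random.randint(1, 3)):    # attend to 1-3 units at a time
            if pos >= len(stream):
                pos = 0                          # recycle the input stream
            unit = strong_unit_at(stream, pos, lexicon) or (stream[pos],)
            units.append(unit)
            pos += len(unit)
        percept = tuple(s for u in units for s in u)
        for chunk in list(lexicon):              # forgetting
            lexicon[chunk] -= DECAY
            if lexicon[chunk] <= 0:
                del lexicon[chunk]
        lexicon[percept] = lexicon.get(percept, 0.0) + GAIN
    return lexicon

# Toy usage: a stream built from two recurring "words".
words = [("ba", "du"), ("ki", "ge", "la")]
stream = [s for _ in range(500) for s in random.choice(words)]
top = sorted(parser_run(stream).items(), key=lambda kv: -kv[1])[:5]
print(top)   # the word-level chunks should dominate
```

Run on such a stream, the strongest entries in the lexicon tend to be the recurring words themselves, which is the sense in which chunks arise as a byproduct of perception and memory rather than of predictive computation.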

Cited by 19 publications (24 citation statements) · References 54 publications (129 reference statements)
“…1), enabling the researchers to argue that chance had been validated as a baseline for their particular experiments. However, in very similar studies, control participants have been found to deviate from chance overall (Rebuschat & Williams, 2012, Exp. 2) or on specific items (Hamrick, 2014a; Rebuschat & Williams, 2012; Rebuschat et al., 2015), making it untenable to consider the corresponding experimental groups' above-chance performance alone as evidence of target learning. Extrapolating out to the area of research as a whole, without a significant body of work establishing the precise circumstances under which participants will simply guess and perform at chance in the absence of training/learning, repeated findings of nonchance performance in human controls underscore that it may be invalid, and therefore irresponsible, to employ chance as the sole baseline without direct experiment-specific evidence to support its use.…”
Section: Comparisons Against Chance
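The methodological point can be made concrete. The sketch below (with invented accuracies and an assumed item count) contrasts the two strategies at issue: testing each group against an assumed 50% chance level versus testing the experimental group against an untrained control. If the control itself deviates from chance, only the second comparison speaks to target learning:

```python
from scipy import stats

# Invented per-participant proportions correct on a two-alternative
# grammaticality test; the item count per participant is likewise assumed.
experimental = [0.62, 0.58, 0.66, 0.55, 0.60, 0.63, 0.57, 0.61]
control      = [0.56, 0.54, 0.58, 0.52, 0.57, 0.55, 0.53, 0.56]
n_items = 60

# Strategy 1: test each group against an assumed 50% chance baseline
# (responses pooled across participants for a simple binomial test).
for name, group in [("experimental", experimental), ("control", control)]:
    k = round(sum(group) * n_items)          # total correct responses
    n = n_items * len(group)                 # total responses
    p = stats.binomtest(k, n, p=0.5, alternative="greater").pvalue
    print(f"{name}: {k}/{n} vs. 50% chance, p = {p:.4f}")

# Strategy 2: test the experimental group against the untrained control,
# which absorbs any item- or task-driven biases shared by both groups.
res = stats.mannwhitneyu(experimental, control, alternative="greater")
print(f"experimental vs. control: U = {res.statistic}, p = {res.pvalue:.4f}")
```

With these invented numbers, even the pooled control comparison can come out above chance, which is precisely the scenario that makes a bare 50% baseline misleading.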
“…Considering, moreover, that SLA researchers tend to collect data from rather homogeneous samples (Plonsky, 2015), and that homogeneous samples are more likely to have internal correlations manifesting as systematic group-level behavior (Field, 2009), it seems reasonable to expect participants to show at least some similar response tendencies independently of training/learning, thereby making chance an inappropriate baseline. Additional examples of this (from Hamrick, 2013, 2014a; Rebuschat et al., 2015) will be presented in the following text in the sections on untrained and trained controls.…”
Section: Comparisons Against Chance
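A small simulation can illustrate the statistical claim in this passage: when participants share item-level response biases, group accuracy drifts away from 50% as a block, so untrained groups test "significantly different from chance" far more often than the nominal alpha level suggests. Everything below is an illustrative assumption, not data from the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_subj, n_items = 2000, 20, 40
T_CRIT = 2.093   # two-tailed .05 critical t for df = n_subj - 1 = 19

def significant_fraction(shared_bias_sd):
    """Fraction of simulated untrained groups testing 'off chance'."""
    hits = 0
    for _ in range(n_sims):
        # Item-level biases shared by every participant in the sample
        # (e.g., a common preference for certain test items).
        item_bias = rng.normal(0.0, shared_bias_sd, n_items)
        p_correct = 1.0 / (1.0 + np.exp(-item_bias))   # per-item P("correct")
        responses = rng.random((n_subj, n_items)) < p_correct
        acc = responses.mean(axis=1)                   # per-participant accuracy
        t = (acc.mean() - 0.5) / (acc.std(ddof=1) / np.sqrt(n_subj))
        hits += abs(t) > T_CRIT
    return hits / n_sims

print("independent responders :", significant_fraction(0.0))  # ~ .05 by design
print("shared item biases     :", significant_fraction(0.8))  # well above .05
```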
“…In the past, it has been common for researchers to fit a single type of computational model, say, a connectionist neural network, to human data using very specific parameters (e.g., Ellis & Schmidt, 1997; Williams & Kuribara, 2008). However, in recent research, Phillip Hamrick (2014) has examined multiple competing computational models with different learning algorithms against adult L2 learning data. He tested these models across a range of parameters, essentially ensuring that the goodness of fit of any model is due to the intrinsic properties of that model rather than to a very limited set of possible parameters within the model.…”
Section: The Evolving Field
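The strategy described, evaluating each competing model over a grid of its free parameters so that the comparison reflects intrinsic model properties rather than one fortunate parameter choice, can be sketched generically. The two toy learning curves and the "human" data below are stand-ins, not the SRN, PARSER, or results from Hamrick (2014):

```python
import itertools
import numpy as np

# Invented human accuracy by training block; the two "models" below are
# toy stand-ins, not the implementations evaluated in the paper.
human = np.array([0.55, 0.61, 0.64, 0.68, 0.70, 0.72])
blocks = np.arange(1, len(human) + 1)

def gradual_learner(rate, asymptote):
    """Smooth exponential approach to an asymptote (prediction-style curve)."""
    return asymptote - (asymptote - 0.5) * np.exp(-rate * blocks)

def chunk_learner(onset, gain):
    """Little change until chunks form, then roughly linear gains."""
    return 0.5 + gain * np.maximum(0, blocks - onset) / blocks[-1]

grids = {
    "gradual": (gradual_learner,
                itertools.product(np.linspace(0.1, 1.0, 10),
                                  np.linspace(0.6, 0.9, 7))),
    "chunking": (chunk_learner,
                 itertools.product(range(0, 4),
                                   np.linspace(0.1, 0.5, 9))),
}

# Evaluate every parameter setting; the comparison between models then
# rests on each model's best achievable fit, not on one chosen setting.
for name, (model, grid) in grids.items():
    rmse, params = min((float(np.sqrt(np.mean((model(*p) - human) ** 2))), p)
                       for p in grid)
    print(f"{name}: best RMSE = {rmse:.4f} at parameters {params}")
```

Reporting each model's best fit over its whole grid is what licenses the claim that a winning model wins on its learning mechanism rather than on tuning.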