2015
DOI: 10.1075/sibil.48.10onn
Implicit learning of non-adjacent dependencies

Cited by 7 publications (7 citation statements)
References 42 publications
“…In their study, one group of participants (i.e., the Learnable Condition) was given pre-exposure to non-adjacent dependencies presented in a context where such learning is relatively easy (i.e., a high-variation context). In this context, non-target elements (X) were highly variable, highlighting the relatively stable target structures (i.e., A_B dependencies); this thus constituted a supportive learning context (Gómez, 2002; Onnis et al., 2003; Onnis et al., 2004). In two comparison groups, by contrast, participants received brief prior exposure either to unreliable non-adjacencies in the high-variation context (i.e., the Non-Learnable Condition) or to the same words as in the Learnable Condition but presented individually in random order (i.e., the Unstructured Condition).…”
Section: Facilitating Implicit Learning Of Multiple Non-adjacent Depe...
confidence: 99%
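The high-variation manipulation described above can be sketched in a few lines: strings take the form A–X–B, and only the number of distinct middle (X) elements differs between conditions. This is a minimal illustrative sketch; the vocabulary items and function names are hypothetical placeholders, not the actual stimuli used by Gómez (2002) or Onnis et al.

```python
import random

# A_B frames (hypothetical placeholder words, not the original stimuli):
# each string pairs a fixed initial element A with a fixed final element B.
A_B_PAIRS = [("pel", "rud"), ("vot", "jic"), ("dak", "tood")]

def make_strings(n_strings, n_middles, seed=0):
    """Generate A-X-B strings; n_middles sets the variability of the X slot."""
    rng = random.Random(seed)
    middles = [f"x{i}" for i in range(n_middles)]  # filler X elements
    out = []
    for _ in range(n_strings):
        a, b = rng.choice(A_B_PAIRS)         # the non-adjacent A_B dependency
        out.append((a, rng.choice(middles), b))
    return out

# High variability (many distinct X items) highlights the stable A_B frame,
# the supportive context; low variability makes the dependency harder to spot.
high = make_strings(100, n_middles=24)
low = make_strings(100, n_middles=2)
```

The point of the sketch is that the A_B dependency itself is identical across conditions; only the variability of the intervening material changes, which is what makes the frames perceptually stable in the high-variation context.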
“…On tasks that rely mostly on statistical learning, such as the Artificial Grammar Learning paradigm, participants generally show relatively low (even if above-chance) accuracy. Furthermore, while statistical learning models generally deal well with complex regularities, such as non-adjacent regularities (Onnis et al., 2015) or complex print-to-speech correspondences (Seidenberg & McClelland, 1989), they cannot, by definition, provide the correct pronunciation for unpredictable items.…”
Section: Statistical Learning As the Process That Enables Readers To ...
confidence: 99%
“…repetition patterns) (Brooks & Vokey, 1991; Jamieson & Mewhort, 2009); or recursive rules (Dienes & Longuet-Higgins, 2004; Jiang et al., 2012). One way of uniting the different accounts of what is learned is through a computational model such as the Simple Recurrent Network (SRN; Elman, 1990), which can learn a range of such structures (as shown by, e.g., Christiansen, Dale, & Reali, 2010; Onnis, Destrebecqz, Christiansen, Chater, & Cleeremans, 2015; Rodriguez, Wiles, & Elman, 1999). Learning local associations or chunking involves minimal integration within or across stimuli; the ways in which implicit learning can transcend chunking thus shed particular light on the mechanisms by which it operates in storing and processing knowledge.…”
Section: Introduction
confidence: 99%
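The SRN idea mentioned in the excerpt above, namely a recurrent hidden state that lets the network predict the next symbol from context spanning intervening material, can be sketched on a toy non-adjacent-dependency grammar (a_X_b vs. c_X_d). This is a simplified illustration under stated assumptions: the grammar, sizes, and learning rate are invented, and for brevity only the output (readout) weights are trained over a fixed random recurrent layer, reservoir-style, whereas a full SRN (Elman, 1990) would also train the input and recurrent weights via backpropagation through time.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 6, 16                       # vocabulary size and hidden units (illustrative)
A, B, C, D, X1, X2 = range(V)      # toy grammar: sequences a-x-b or c-x-d

Wxh = rng.normal(0.0, 0.5, (V, H))  # input -> hidden (held fixed in this sketch)
Whh = rng.normal(0.0, 0.5, (H, H))  # hidden -> hidden recurrence (held fixed)
Why = np.zeros((H, V))              # hidden -> output readout (trained)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def make_seq():
    """Sample one non-adjacent-dependency string: the final element depends
    on the first, across a variable middle element."""
    x = X1 if rng.random() < 0.5 else X2
    return [A, x, B] if rng.random() < 0.5 else [C, x, D]

def epoch(Why, lr=0.1, n=300):
    """Next-symbol prediction; SGD on the readout only. Returns mean loss."""
    losses = []
    for _ in range(n):
        h = np.zeros(H)
        seq = make_seq()
        for t in range(len(seq) - 1):
            xt = np.zeros(V)
            xt[seq[t]] = 1.0
            h = np.tanh(xt @ Wxh + h @ Whh)   # hidden state carries context
            p = softmax(h @ Why)               # predicted next-symbol distribution
            losses.append(-np.log(p[seq[t + 1]]))
            dy = p.copy()
            dy[seq[t + 1]] -= 1.0              # cross-entropy gradient at output
            Why -= lr * np.outer(h, dy)        # in-place update of the readout
    return float(np.mean(losses))

loss_first = epoch(Why)
loss_last = loss_first
for _ in range(5):
    loss_last = epoch(Why)
```

Because the hidden state at the final step still reflects the first symbol, even a linear readout can exploit the non-adjacent dependency, which is the property the citing authors appeal to when treating the SRN as a unifying account of what is learned.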