A theoretical debate in artificial grammar learning (AGL) concerns the learnability of hierarchical structures. Recent studies using an A(n)B(n) grammar have drawn conflicting conclusions (Bahlmann & Friederici, 2006; De Vries, Monaghan, Knecht, & Zwitserlood, 2008). We argue that two conditions crucially affect learning A(n)B(n) structures: sufficient exposure to zero-level-of-embedding (0-LoE) exemplars and a staged input. In two AGL experiments, learning was observed only when the training set was staged and contained 0-LoE exemplars. Our results may help explain how complex natural structures are learned from exemplars.
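To make the grammar concrete: in a centre-embedded A(n)B(n) grammar, a 0-LoE exemplar is a single A-B pair, and each further level of embedding nests one more dependent pair inside (A1 A2 B2 B1). The following minimal Python sketch is ours, not taken from the cited studies; the vocabularies, function name, and staged proportions are illustrative assumptions only.

```python
import random

# Hypothetical category vocabularies; AGL studies typically use nonsense
# syllables (e.g., "be", "pu"), so these names are placeholders.
A_WORDS = ["a1", "a2", "a3"]
B_WORDS = ["b1", "b2", "b3"]

def anbn_string(level_of_embedding):
    """Build one centre-embedded A(n)B(n) string.

    level_of_embedding = 0 gives the simplest exemplar (one A-B pair);
    each extra level nests one more dependent pair inside, so the
    dependencies are mirrored: A1 A2 B2 B1.
    """
    n = level_of_embedding + 1
    indices = [random.randrange(len(A_WORDS)) for _ in range(n)]
    # A-words in order, then the matching B-words in reverse (mirror) order.
    return [A_WORDS[i] for i in indices] + [B_WORDS[i] for i in reversed(indices)]

# A staged input in the spirit of the abstract: many 0-LoE exemplars first,
# then progressively deeper embeddings (the proportions are invented).
training_set = (
    [anbn_string(0) for _ in range(6)]
    + [anbn_string(1) for _ in range(3)]
    + [anbn_string(2) for _ in range(1)]
)
for s in training_set:
    print(" ".join(s))
```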
Recent research has put forward the idea that speech production in Chinese is governed by the syllable as the fundamental phonological unit. However, onset priming may simply be more difficult to obtain in Mandarin Chinese. In this study, we therefore increased the degree of overlap between prime and target from C to CV (i.e., extending beyond a single phoneme) and additionally manipulated whether primes and targets shared syllable structure (CV vs. CVN). Subsyllabic priming effects were found (i.e., for onset-plus-vowel overlap but not for pure onset overlap), contrasting with the claim that the syllable is the obligatory building block in the initial construction of Mandarin Chinese phonology.
It has been suggested that external and/or internal limitations may paradoxically lead to superior learning, captured by the concepts of "starting small" and "less is more" (Elman, 1993; Newport, 1990). In this paper, we explore which type of incremental ordering during training helps learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two types of simple recursive grammars: right-branching and center-embedding, with recursive embedded clauses in fixed positions and of fixed length. This effect was replicated in Experiment 2 (N = 100). In Experiments 3 and 4, we used a more complex center-embedded grammar with recursive loops in variable positions, producing strings of variable length. When participants were presented with an incremental ordering of training stimuli, as in natural language, they were better able to generalize their knowledge of simple units to more complex units when the training input "grew" according to structural complexity rather than according to string length. Overall, the results suggest that starting small confers an advantage for learning complex center-embedded structures when the input is organized according to structural complexity.
Hierarchical centre-embedded structures pose great difficulty for language learners because of their complexity. A recent artificial grammar learning study (Lai & Poletiek, 2011) demonstrated a starting-small (SS) effect: a staged input and sufficient exposure to 0-level-of-embedding exemplars were the critical conditions for learning AnBn structures. The current study tests (1) a more sophisticated type of SS (a gradually rather than discretely growing input) and (2) the effect of the frequency distribution of the input. The results indicate that SS works optimally under additional conditional cues, such as a skewed frequency distribution in which simple stimuli are more numerous than complex ones.