2022
DOI: 10.31234/osf.io/ktnur
Preprint

Learning to predict and predicting to learn: Before and beyond the syntactic bootstrapper

Abstract: Young children can exploit the syntactic context of a novel word to narrow down its probable meaning. This is syntactic bootstrapping. A learner that uses syntactic bootstrapping to foster lexical acquisition must first have identified the semantic information that a syntactic context provides. According to the semantic seed hypothesis, children discover the semantic predictiveness of syntactic contexts by tracking the distribution of familiar words. We propose that these learning mechanisms relate to a larger cognitive…

Cited by 3 publications (5 citation statements)
References: 108 publications
“…The latent representations APC learns have been shown to be applicable to a range of speech tasks, including phone classification (Chung et al, 2019) and subword modeling (Feng, Żelasko, Moro-Velázquez, & Scharenborg, 2021; see also Yang et al (2021)). Central to modeling purposes, APC learning criterion aligns with the idea of predictive processing in human perception (e.g., Friston, 2010;Rao & Ballard, 1999;Babineau, Havron, Dautriche, de Carvalho, & Christophe, 2022) and a similar learning mechanism can thereby be assumed to be available to any mammalian learner. More specifically, APC learning is based on minimizing prediction error of future speech observations, given access to past and current speech stimulus.…”
Section: Methods (mentioning)
confidence: 99%
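The statement above describes the APC objective only in words: minimize the error of predicting future speech frames from past and current ones. As a loose illustration, and not the cited authors' implementation, the sketch below shows what such an objective can look like in PyTorch; the feature dimension, prediction shift, and GRU architecture are assumptions made for the example.

```python
# A minimal sketch of an Autoregressive Predictive Coding (APC) style objective:
# a unidirectional model predicts speech features a few frames ahead and is
# trained by minimizing the prediction error. Dimensions and hyperparameters
# are illustrative only.
import torch
import torch.nn as nn

class APCModel(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=256):
        super().__init__()
        # Unidirectional GRU: predictions at time t use only past and current frames.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x):
        h, _ = self.rnn(x)      # (batch, time, hidden)
        return self.out(h)      # predicted future frames, (batch, time, feat_dim)

def apc_loss(model, feats, shift=3):
    """Predict the frame `shift` steps ahead and return the L1 prediction error."""
    inputs = feats[:, :-shift, :]   # past and current frames
    targets = feats[:, shift:, :]   # frames `shift` steps into the future
    preds = model(inputs)
    return nn.functional.l1_loss(preds, targets)

# Toy usage: random "log-mel" features stand in for real speech.
feats = torch.randn(8, 100, 40)                       # (batch, frames, feat_dim)
model = APCModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = apc_loss(model, feats)
loss.backward()
optim.step()
print(f"prediction error: {loss.item():.3f}")
```

The point that matches the quoted description is that only past and current frames feed the predictor, and the sole training signal is the error on future frames.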
“…However, more recent theories on language acquisition do highlight synergies when learning form and meaning (e.g., Abend et al, 2017;Babineau et al, 2022;Christophe et al, 2008;Dupoux, 2018;Feldman et al, 2013;Fourtassi et al, 2020;Landau & Gleitman, 1985;Räsänen & Rasilo, 2015), as well as when leveraging information about how language is used in context to learn various linguistic structures (Bohn & Frank, 2019;E. V. Clark, 2016E.…”
Section: Implicit Feedback (mentioning)
confidence: 99%
“…Such error-driven learning mechanisms have been proposed to play a major role in human learning more generally (A. Clark, 2015;Friston, 2009) and are increasingly being applied to language acquisition (Babineau et al, 2022;Cox et al, 2020).…”
Section: Communicative Feedback (mentioning)
confidence: 99%
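"Error-driven learning" in the statement above refers to updating associations in proportion to a prediction error, as in delta-rule (Rescorla-Wagner style) models. The snippet below is a generic textbook sketch, not drawn from any of the cited works; the cue/outcome setup and learning rate are illustrative assumptions.

```python
# A minimal sketch of error-driven (delta-rule) learning: associative weights
# are adjusted in proportion to the discrepancy between predicted and observed
# outcomes. Cues, outcome, and learning rate are illustrative only.
import numpy as np

def delta_rule_update(weights, cues, outcome, lr=0.1):
    """Update cue->outcome weights by the prediction error on this trial."""
    prediction = weights @ cues        # summed activation of the cues present
    error = outcome - prediction       # the prediction error drives all learning
    return weights + lr * error * cues # only active cues are adjusted

# Toy example: two cues always co-occur with the outcome.
weights = np.zeros(2)
for _ in range(50):
    weights = delta_rule_update(weights, cues=np.array([1.0, 1.0]), outcome=1.0)
print(weights)  # the cues end up sharing credit, and the error shrinks over trials
```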