2017
DOI: 10.1111/desc.12629

Curiosity‐based learning in infants: a neurocomputational approach

Abstract: Infants are curious learners who drive their own cognitive development by imposing structure on their learning environment as they explore. Understanding the mechanisms by which infants structure their own learning is therefore critical to our understanding of development. Here we propose an explicit mechanism for intrinsically motivated information selection that maximizes learning. We first present a neurocomputational model of infant visual category learning, capturing existing empirical data on the role of…
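The abstract's core idea, selecting information so as to maximize learning, can be sketched as a toy "maximize learning progress" loop: attend to whichever stimulus the learner has most recently been reducing its error on. Everything below (the linear learner, the progress heuristic, all names and parameters) is an illustrative assumption, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn(weights, stimulus, lr=0.1):
    """One step of a linear auto-associator: nudge the reconstruction
    toward the stimulus and return the updated weights and squared error."""
    error = stimulus - weights @ stimulus
    weights += lr * np.outer(error, stimulus)
    return weights, float(np.sum(error ** 2))

def curiosity_loop(stimuli, steps=200):
    """Repeatedly attend to the stimulus with the highest recent
    learning progress (drop in reconstruction error)."""
    n, d = stimuli.shape
    weights = np.zeros((d, d))
    last_err = np.full(n, np.inf)
    progress = np.ones(n)             # optimistic start: everything looks learnable
    choices = []
    for _ in range(steps):
        i = int(np.argmax(progress))  # curiosity: go where learning is fastest
        weights, err = learn(weights, stimuli[i])
        if np.isfinite(last_err[i]):
            progress[i] = last_err[i] - err   # error drop = learning progress
        last_err[i] = err
        choices.append(i)
    return choices

stimuli = rng.normal(size=(4, 8))   # four hypothetical visual stimuli
choices = curiosity_loop(stimuli)
```

Because progress on a mastered stimulus shrinks toward zero, attention migrates across all stimuli rather than fixating on one, which is the qualitative signature of curiosity-driven sampling the abstract describes.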

Cited by 63 publications (75 citation statements). References: 69 publications.
“…Thus, when the object appears without the label there is a mismatch between representation and reality. This mismatch leads to an increase in network error for the previously labeled stimulus only, which has been interpreted in the literature as a model of longer looking times [3], [26], [28]–[30]. Further, these results delineate between the two possible explanations for infants' behavior in the empirical task; specifically, our results support accounts of early word learning in which labels are initially encoded as low-level, perceptual features and integrated into object representations.…”
Section: Discussion (supporting)
confidence: 78%
“…We used a dual-memory three-layer auto-encoder model inspired by Westermann and Mareschal [3] to implement both the labels-as-features and the compound-representations theories. Such neurocomputational models have successfully captured looking time data from infant categorization tasks [3], [26]–[30]. Auto-encoders reproduce input patterns on their output layer by comparing input and output activation after presentation of training stimuli, then using this error to adjust the weights between units using back-propagation [31].…”
Section: A. Model Architecture (mentioning)
confidence: 99%
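The auto-encoder described in this excerpt can be sketched in a few lines: reproduce the input on the output layer, back-propagate the input-output error, and read the remaining network error as a looking-time proxy. Layer sizes, learning rate, and the stimulus encoding below are illustrative assumptions, not the published model's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AutoEncoder:
    """Three-layer auto-encoder trained by back-propagation."""
    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W2 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.lr = lr

    def forward(self, x):
        h = sigmoid(self.W1 @ x)   # hidden representation
        y = sigmoid(self.W2 @ h)   # reconstruction of the input
        return h, y

    def train_step(self, x):
        h, y = self.forward(x)
        err = y - x
        # back-propagate through the sigmoid derivatives
        delta_out = err * y * (1 - y)
        delta_hid = (self.W2.T @ delta_out) * h * (1 - h)
        self.W2 -= self.lr * np.outer(delta_out, h)
        self.W1 -= self.lr * np.outer(delta_hid, x)
        return float(np.sum(err ** 2))  # network error: looking-time proxy

# A labelled stimulus: 7 perceptual features plus one label unit set to 1
# (the labels-as-features encoding the excerpt describes).
stimulus = np.concatenate([rng.uniform(size=7), [1.0]])
net = AutoEncoder(n_in=8, n_hidden=4)
errors = [net.train_step(stimulus) for _ in range(1000)]

# Presenting the same object without its label (label unit = 0) recreates
# the representation-reality mismatch discussed above: error rises again.
unlabelled = stimulus.copy()
unlabelled[-1] = 0.0
```

Error falls with repeated presentations (familiarization), and removing the label from a familiar object drives it back up, which is the mismatch-driven increase in looking time the Discussion excerpt reports.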
“…The trade-off between information processing progress (indexed by frontal theta oscillatory amplitude modulation) and bias toward incoming stimulation (indexed by P1 peak amplitude modulation) highlighted by this research supports developmental theories portraying optimal learning as evidenced by a shift from exploitation of the resource at hand to exploration of incoming sensory input (e.g., Cohen et al., 2007; Mather, 2013; Twomey & Westermann, 2018). The specificity of our paradigm lies in its ability to characterize these interacting mechanisms at a neural level.…”
Section: Figure (supporting)
confidence: 65%
“…Here, we posit that motivation for communication may be behind individual differences in gaze following. Twomey and Westermann's (2018) infant computational modeling study suggested that infants are intrinsically motivated to select information that maximizes learning. In the context of learning in gaze following, it can be considered that interacting with others would maximize information to learn about the environment.…”
Section: Experiment 2 Results and Discussion (mentioning)
confidence: 99%