2023
DOI: 10.1093/cercor/bhad007
Brain-constrained neural modeling explains fast mapping of words to meaning

Abstract: Although teaching animals a few meaningful signs is usually time-consuming, children acquire words easily after only a few exposures, a phenomenon termed “fast-mapping.” Meanwhile, most neural network learning algorithms fail to achieve reliable information storage quickly, raising the question of whether a mechanistic explanation of fast-mapping is possible. Here, we applied brain-constrained neural models mimicking fronto-temporal-occipital regions to simulate key features of semantic associative learning. W…
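The abstract's core contrast (most learning algorithms need many repetitions, whereas fast mapping succeeds after a few exposures) can be illustrated in miniature with a Hebbian associative memory. This is a hedged toy sketch, not the authors' brain-constrained model: a word-form pattern and a meaning pattern (both hypothetical binary vectors) are bound by outer-product Hebbian updates, so the meaning is recoverable after just two exposures.

```python
# Minimal Hebbian associative-memory sketch (illustrative only; NOT the
# paper's brain-constrained fronto-temporal-occipital model). A word-form
# pattern and a meaning pattern are bound by Hebbian outer-product updates,
# so recall succeeds after very few exposures ("fast mapping" in miniature).

def hebbian_learn(word, meaning, weights, rate=1.0):
    """One exposure: strengthen weights[i][j] where both units are active."""
    for i, pre in enumerate(word):
        for j, post in enumerate(meaning):
            weights[i][j] += rate * pre * post
    return weights

def recall(word, weights, threshold=0.5):
    """Retrieve the meaning pattern from the word cue via thresholded sums."""
    n_out = len(weights[0])
    sums = [sum(word[i] * weights[i][j] for i in range(len(word)))
            for j in range(n_out)]
    peak = max(sums) or 1.0          # avoid division by zero before learning
    return [1 if s / peak >= threshold else 0 for s in sums]

# Hypothetical sparse binary patterns for a word form and its meaning.
word    = [1, 0, 1, 0, 0, 1]
meaning = [0, 1, 0, 0, 1, 0]

W = [[0.0] * len(meaning) for _ in range(len(word))]
for _ in range(2):                   # just two exposures
    W = hebbian_learn(word, meaning, W)

print(recall(word, W))               # recovers the stored meaning pattern
```

Note the design choice: a single pair of correlated binary patterns is the easiest case for Hebbian storage; the paper's point is that achieving this reliably in biologically constrained multi-area networks is the hard part.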


Cited by 8 publications (9 citation statements)
References 150 publications
“…Following earlier modeling work (Constant et al., 2023; Henningsen‐Schomers et al., 2023; Henningsen‐Schomers & Pulvermüller, 2022; R. Tomasello et al., 2018, 2019), we modeled neuronal learning and brain activity using brain‐constrained deep neural network models (see Pulvermüller et al., 2014, 2021; see Figure 2, Panels A and B, and Appendix S1 in the Supporting Information online for details). The model was implemented on the neural network simulation platform Felix (Wennekers, 2009).…”
Section: Methods
confidence: 99%
“…In addition, this framework has inspired research on the intersubjective "we" representations that facilitate coordinated multiagent behavior (Sebanz & Knoblich, 2021). Furthermore, and of greatest relevance here, recent neurocomputational investigations based directly on the GCM incorporate some (certainly not all, but some) of the social aspects of learning language-specific concepts (Constant, Pulvermüller, & Tomasello, 2023). Even apart from these considerations, however, the fact that language-specific concepts are public rather than private and are acquired through social interaction rather than personal bodily experience does not really bear on my argument that the GCM entails linguistic relativity, because that argument hinges on the content of language-specific concepts, not their origin.…”
Section: Language-specific Versus Language-independent Concepts
confidence: 99%
“…Furthermore, it goes without saying that there is a wide range of different semantic word types and learning scenarios. The modelling of 'pure' object and action words grounded directly in the sensorimotor system is one way to address such differences (for other word types see [25,44,112]). The differences in the simulated semantic grounding process, via either objects or actions, led to qualitative differences in the related CAs, which extended more strongly into either the primary visual or the primary articulatory-motor cortex, respectively.…”
Section: CA Topographies and Meaningful Symbol Storage
confidence: 99%
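The last citation statement describes how the grounding modality shapes cell-assembly (CA) topography: object words grounded visually grow CAs into visual areas, action words grounded motorically into motor areas. A hedged toy sketch of that recruitment logic (hypothetical unit names and trial counts, not the cited simulations): units repeatedly co-active with the word form during learning are recruited into its CA, so the CA extends into whichever area supplied the grounding.

```python
# Toy sketch (NOT the cited brain-constrained simulations) of how grounding
# modality shapes CA topography: units co-active with a word form on enough
# learning trials join its cell assembly, so an object word grounded in
# "visual" units and an action word grounded in "motor" units grow CAs
# that extend into different primary areas.

VISUAL = {f"v{i}" for i in range(4)}   # stand-in for primary visual cortex
MOTOR  = {f"m{i}" for i in range(4)}   # stand-in for articulatory-motor cortex

def form_ca(word_units, grounding_units, noise_units, min_coactive=2):
    """Recruit units co-active with the word form on at least `min_coactive`
    of three hypothetical learning trials (one trial has spurious activity)."""
    trials = [
        word_units | grounding_units,   # trial 1: full semantic grounding
        word_units | grounding_units,   # trial 2: full semantic grounding
        word_units | noise_units,       # trial 3: spurious co-activation
    ]
    counts = {}
    for active in trials:
        for u in active:
            counts[u] = counts.get(u, 0) + 1
    return {u for u, c in counts.items() if c >= min_coactive}

word = {"w0", "w1"}                          # shared "word form" units
object_ca = form_ca(word, VISUAL, {"m0"})    # object word, visual grounding
action_ca = form_ca(word, MOTOR,  {"v0"})    # action word, motor grounding

print(object_ca & VISUAL, object_ca & MOTOR)  # CA reaches visual, not motor
print(action_ca & MOTOR,  action_ca & VISUAL) # CA reaches motor, not visual
```

The threshold on repeated co-activation is a crude stand-in for Hebbian consolidation; it shows why consistent grounding, but not one-off spurious activity, determines which cortical area a CA extends into.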