Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative and hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning, to implementation of inhibition and control, along with neuroanatomical properties including area structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, based on these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.

PULVERMÜLLER ET AL., BIOLOGICAL CONSTRAINTS ON NEURAL NETWORK MODELS OF COGNITIVE FUNCTIONS

An important step towards addressing the neural substrate was taken by so-called localist models of cognition and language [8][9][10][11][12], which filled the boxes of modular models with single artificial 'neurons' thought to locally represent cognitive elements 13 such as perceptual features and percepts, phonemes, word forms, meaning features, concepts and so on (Fig. 1a). The 1:1 relationship between the artificial neuron-like computational-algorithmic implementations and the entities postulated by cognitive theories made it easy to connect the two types of models.
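The localist idea of a 1:1 mapping between units and cognitive entities can be illustrated with a minimal sketch. This is not the implementation of any specific published model; the unit names, link weights and threshold below are invented purely for illustration, assuming simple feed-forward activation spreading from feature units to word and concept units.

```python
# Minimal localist sketch: one unit per cognitive entity (phoneme, word
# form, concept), with hand-wired weighted links between units.
# All names and weights here are illustrative assumptions.
connections = {
    "phoneme:/k/": {"word:CAT": 0.4},
    "phoneme:/ae/": {"word:CAT": 0.4},
    "phoneme:/t/": {"word:CAT": 0.4},
    "word:CAT": {"concept:ANIMAL": 1.0},
}

def propagate(active_units, threshold=1.0):
    """Sum incoming activation per unit; a unit becomes active if the
    summed input reaches the threshold."""
    activation = {}
    for unit in active_units:
        for target, weight in connections.get(unit, {}).items():
            activation[target] = activation.get(target, 0.0) + weight
    return {u for u, a in activation.items() if a >= threshold}

# Presenting all three phoneme units activates the word unit
# (0.4 * 3 = 1.2 >= 1.0), which in turn activates the concept unit.
words = propagate({"phoneme:/k/", "phoneme:/ae/", "phoneme:/t/"})
concepts = propagate(words)
```

Because each entity is a single unit, the mapping to a cognitive theory's boxes is transparent; the price, as discussed next, is that such single-unit representations sit uneasily with neuroscientific evidence.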
However, the notion that individual neurons each carry major cognitive functions is controversial today and difficult to reconcile with evidence from neuroscience research 14,15. This is not to dispute the great specificity of some neurons' responses 16, but rather to highlight the now dominant view that even these very specific cells "do not act in isolation but are part of cell assemblies representing familiar concepts", objects or other entities 17,18. A further limitation of the localist models was that they did not systematically address the mechanisms underlying the formation of new representations and their connections.

Auto-associative networks. Neuroanatomical observations suggest that the cortex is characterized by ample intrinsic and recurrent connectivity between its neurons and can therefore be seen as an associative memory 19,20. This position inspired a family of artificial neural networks, called 'auto-associative networks' or 'attractor networks' [21][22][23][24][25][26][27][28][29][30][31][32]. Auto-associative network models implement neurons with connections betwe...
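The auto-associative principle can be sketched with a Hopfield network, the canonical attractor model of this family: Hebbian learning strengthens connections between co-active units, so that stored activity patterns become attractor states that the network settles into even from a degraded cue. The patterns and network size below are illustrative, not taken from any cited study.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: the weight between two units grows with the
    correlation of their activity across stored patterns (+1/-1 coded)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Synchronous updates until the state settles into an attractor."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store two orthogonal patterns, then recall one from a corrupted cue.
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(np.stack([p1, p2]))
cue = p1.copy()
cue[0] = -1  # flip one bit; recurrent dynamics repair it
```

The key point for the main text is that the "memory" lives in the recurrent connections between units, not in any single unit, in line with the cell-assembly view.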
A neurobiologically constrained deep neural network mimicking cortical areas relevant for sensorimotor, linguistic and conceptual processing was used to investigate the putative biological mechanisms underlying conceptual category formation and semantic feature extraction. Networks were trained to learn neural patterns representing specific objects and actions relevant to semantically 'ground' concrete and abstract concepts. Grounding sets consisted of three grounding patterns with neurons representing specific perceptual or action-related features; neurons were either unique to one pattern or shared between patterns of the same set. Concrete categories were modelled as pattern triplets overlapping in their 'shared neurons', thus implementing semantic feature sharing across all instances of a category. In contrast, abstract concepts had partially shared feature neurons common to only pairs of category instances, thus exhibiting family resemblance but lacking full feature overlap. Stimulation with concrete and abstract conceptual patterns and biologically realistic unsupervised learning caused the formation of strongly connected cell assemblies (CAs) specific to individual grounding patterns, whose neurons were spread out across all areas of the deep network. After learning, the shared neurons of the instances of concrete concepts were more prominent in central areas when compared with peripheral sensorimotor ones, whereas for abstract concepts the converse pattern of results was observed, with central areas exhibiting relatively fewer neurons shared between pairs of category members. We interpret these results in light of the current knowledge about the relative difficulty children show when learning abstract words. Implications for future neurocomputational modelling experiments as well as neurobiological theories of semantic representation are discussed.
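The contrast between concrete and abstract grounding sets described above can be sketched in a few lines, modelling each grounding pattern as a set of neuron indices. The indices and set sizes are invented for illustration; the point is only the overlap structure: concrete triplets share one common neuron subset across all three patterns, whereas abstract triplets share neurons pairwise only (family resemblance, no full overlap).

```python
# Toy sketch of the grounding-pattern construction; neuron indices are
# arbitrary illustrative integers, not values from the actual model.

def make_concrete_set(shared, uniques):
    """Concrete concept: every grounding pattern contains the same
    'shared neurons' plus its own unique feature neurons."""
    return [set(shared) | set(u) for u in uniques]

def make_abstract_set(pair_shared, uniques):
    """Abstract concept: shared neurons are common to pairs of patterns
    only, so no neuron occurs in all three patterns."""
    patterns = [set(u) for u in uniques]
    for (i, j), neurons in pair_shared.items():
        patterns[i] |= set(neurons)
        patterns[j] |= set(neurons)
    return patterns

concrete = make_concrete_set({0, 1}, [{10}, {11}, {12}])
abstract = make_abstract_set(
    {(0, 1): {20}, (1, 2): {21}, (0, 2): {22}},
    [{10}, {11}, {12}],
)
```

With this construction, the intersection of all three concrete patterns is exactly the shared-neuron set, while the intersection of all three abstract patterns is empty even though every pair of patterns overlaps.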