Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations: a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons arranged in a modular, microcolumn-based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence; after training, presentation of only the first element of a learned sequence is sufficient for the network to recall its entire learned representation. An extended version of the model can also successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
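The eligibility-trace mechanism mentioned above can be illustrated with a minimal sketch: coincident pre/post activity tags a synapse with a decaying trace, and a later gating signal converts the trace into a lasting weight change. All parameter names and values below are illustrative assumptions, not those of the model described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps = 200
dt = 1.0       # time step (ms)
tau_e = 50.0   # eligibility-trace decay time constant (ms)
eta = 0.01     # learning rate

w = 0.5        # synaptic weight
e = 0.0        # eligibility trace

for t in range(n_steps):
    pre = rng.random() < 0.2    # presynaptic spike this step?
    post = rng.random() < 0.2   # postsynaptic spike this step?

    # Coincident pre/post activity tags the synapse...
    if pre and post:
        e += 1.0
    # ...and the tag decays exponentially between coincidences.
    e -= (dt / tau_e) * e

    # A delayed third factor (e.g. a reward signal) arriving within the
    # trace's lifetime converts the tag into a lasting weight change.
    gate = 1.0 if t % 50 == 49 else 0.0
    w += eta * gate * e

print(w)
```

Because the trace outlives the spikes that created it, this kind of rule can bridge the delay between network activity and a later instructive signal, which is what makes it attractive for sequence learning.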
¶ These authors contributed equally to this work.

Current models of word production in Broca's area posit that sequential and staggered semantic, lexical, phonological, and articulatory processes precede articulation. Using millisecond-resolution intracranial recordings, we evaluated spatiotemporal dynamics and high-frequency functional interconnectivity between ventrolateral prefrontal regions during single-word production. Through systematic variation of retrieval, selection, and phonological loads, we identified specific activation profiles and functional coupling patterns between these regions that fit within current psycholinguistic theories of word production. However, network interactions underpinning these processes activate in parallel (not sequentially), while the processes themselves are indexed by specific changes in network state. We found evidence suggesting that pars orbitalis is coupled with pars triangularis during lexical retrieval, while lexical selection in Broca's area is terminated via coupled activity with M1 at articulation onset. Taken together, this work reveals that speech production relies on highly specific inter-regional couplings in rapid sequence in the language-dominant hemisphere.

… slow, and that semantico-lexical, phonological, and articulatory processes occur sequentially, with hand-off occurring from the end of one process to enable the next [17, 18]. However, recent work suggests that very early (within 200 ms) after input, acoustic structure and semantics are already being processed [13, 19, 20]. Additionally, functional imaging techniques have also been used to ascribe this stepwise ontology of word production to distinct sub-regions of Broca's area, and can parsimoniously be interpreted as implying that linguistic processes reside in distinct cortical regions [17, 18, 21-23].
Over the last decade, invasive recordings of human cortex have led to novel insights into language mechanisms [6, 24-27]. However, these studies have evaluated the spatial and temporal characteristics of individual regions in isolation, with limited, if any, analysis of the network behavior likely underpinning them [6]. Here, we seek to evaluate how linguistic processes are effected by networks connecting the sub-regions involved in these processes, and whether changes in network state index the transition from one constituent process to the next.

To derive this network-based description of dynamics within the IFG during word production, we collected data in a large cohort (n=27) of patients with experimental conditions that varied retrieval, selection, and phonological loads. We specifically evaluated interactions between sub-regions of the IFG and motor cortex in the subcentral gyrus (sCG) during the intervals at which constituent linguistic processes might occur. Intermediary states identified in the functional network connecting the components of the IFG during object ...
Activity-dependent modification of synaptic efficacies is a cellular substrate of learning and memory. Experimental evidence shows that these modifications are synapse-specific and that their long-lasting effects are associated with a sustained increase in the concentration of specific proteins such as PKMζ. However, such proteins are likely to diffuse away from their initial synaptic location and spread to neighboring synapses, potentially compromising synapse specificity. In this article, we address the issue of synapse specificity during memory maintenance. Assuming that the long-term maintenance of synaptic plasticity is accomplished by a molecular switch, we carry out analytical calculations and perform simulations using the reaction-diffusion package in NEURON to determine the limits of synapse specificity during maintenance. Moreover, we explore the effects of the diffusion and degradation rates of proteins, and of the geometrical characteristics of dendritic spines, on synapse specificity. We conclude that synapse specificity during maintenance requires that molecular switches reside in dendritic spines. Even when the molecular switch resides in spines, specificity imposes strong limits on the diffusion and turnover rates of maintenance molecules, as well as on the morphological properties of dendritic spines. These constraints are quite general and apply to most existing models suggested for maintenance. The parameter values can be evaluated experimentally, and if they do not fall within the predicted range, the validity of this class of maintenance models would be challenged.
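The trade-off between diffusion and degradation described above can be summarized by the standard steady-state reaction-diffusion length scale, λ = √(D/k): a protein spreading with diffusion coefficient D and degraded at rate k influences synapses within roughly λ of its source. The formula is a generic textbook result, and the numbers below are illustrative assumptions, not values taken from the article.

```python
import math

def specificity_length_um(D_um2_per_s: float, k_per_s: float) -> float:
    """Characteristic spread of a diffusing, degraded protein:
    lambda = sqrt(D / k), the steady-state reaction-diffusion length."""
    return math.sqrt(D_um2_per_s / k_per_s)

# Illustrative numbers (assumptions, not measured values):
# D = 0.1 um^2/s for a slowly diffusing membrane-associated kinase,
# k = 1/3600 s^-1 for a ~1 h turnover time.
lam = specificity_length_um(0.1, 1.0 / 3600.0)
print(round(lam, 1))  # → 19.0 (micrometers)
```

With inter-synapse spacing on the order of a micrometer, a spread of tens of micrometers would abolish specificity in the dendritic shaft, which is consistent with the abstract's conclusion that the switch must be confined to spines (where the spine neck restricts escape).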
Dopamine (DA)-releasing neurons in the midbrain learn response patterns that represent reward-prediction error (RPE). Typically, models proposing a mechanistic explanation of how dopamine neurons learn to exhibit RPE are based on temporal-difference (TD) learning, a machine-learning algorithm. However, mechanistic models motivated by TD learning face two significant hurdles. First, TD-based models typically require rather unrealistic components, such as long and robust temporal chains of feature-specific neurons that tile, a priori, each interval from a given stimulus to a given reward. Second, various predictions of TD clash with experimental observations of how RPE evolves over learning. Here, we present a biophysically plausible plastic network model of spiking neurons that learns RPEs and can replicate results observed in multiple experiments. This model, coined FLEX (Flexibly Learned Errors in Expected Reward), learns feature-specific representations of time, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. Following learning, model dopamine neurons in FLEX report a distribution of response types, as observed experimentally and as used in machine learning. Dopamine-neuron firing in our model reflects an RPE before and after learning, but not necessarily during learning, allowing the model to reconcile seemingly inconsistent experiments and to make unique predictions that contrast with those of TD.
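For reference, the TD baseline that FLEX is contrasted with can be sketched in a few lines of tabular TD(0). The chain of states below plays the role of the a-priori temporal tiling the abstract criticizes; the prediction error δ = r + γV(s′) − V(s) is the quantity identified with dopamine firing. State count, rates, and reward placement are illustrative choices, not values from the article.

```python
import numpy as np

n_states = 5   # states tiling the stimulus-to-reward interval
gamma = 1.0    # discount factor
alpha = 0.1    # learning rate
V = np.zeros(n_states + 1)   # value per state; terminal state appended

for episode in range(500):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0   # reward at interval end
        delta = r + gamma * V[s + 1] - V[s]     # RPE: the "dopamine" signal
        V[s] += alpha * delta

print(np.round(V[:n_states], 2))  # → [1. 1. 1. 1. 1.]
```

Over training, δ migrates backward along the chain from the reward to the stimulus; the disagreement noted in the abstract is that this gradual backward migration of the error signal is not what several experiments observe during learning.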
Recurrent neural networks of spiking neurons can exhibit long-lasting and even persistent activity. Such networks are often not robust, however, and exhibit spike and firing-rate statistics that are inconsistent with experimental observations. To overcome this problem, most previous models had to assume that recurrent connections are dominated by slower, NMDA-type excitatory receptors. Usually, the single neurons within these networks are very simple leaky integrate-and-fire neurons or other low-dimensional model neurons. Real neurons, however, are much more complex and exhibit a plethora of active conductances that are recruited in both the sub- and suprathreshold regimes. Here we show that by including a small number of additional active conductances, we can produce recurrent networks that are both more robust and exhibit firing-rate statistics more consistent with experimental results. We show that this holds both for bistable recurrent networks, which are thought to underlie working memory, and for slowly decaying networks, which might underlie the estimation of interval timing. We also show that, with these conductances, such networks can be trained using a simple learning rule to predict temporal intervals an order of magnitude longer than those that can be trained in networks of leaky integrate-and-fire neurons.
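To make the contrast concrete, here is a single leaky integrate-and-fire neuron augmented with one slow, spike-recruited current (a generic adaptation variable standing in for an additional active conductance). This is a minimal sketch with placeholder parameters, not the network model or the specific conductances used in the article.

```python
# Leaky integrate-and-fire neuron with one slow, spike-triggered
# adaptation current. All parameters are illustrative placeholders.
dt, T = 0.1, 500.0                    # time step and duration (ms)
tau_m, R, v_rest = 20.0, 10.0, -70.0  # membrane tau (ms), resistance, rest (mV)
v_thresh, v_reset = -50.0, -65.0      # threshold and reset (mV)
tau_w, b = 100.0, 0.5                 # slow-current tau (ms) and increment

v, w_adapt = v_rest, 0.0
I_ext = 2.5                           # constant input current
spikes = []

for step in range(int(T / dt)):
    # Membrane integrates input minus the slow current.
    v += dt * (-(v - v_rest) + R * (I_ext - w_adapt)) / tau_m
    # The slow current decays between spikes...
    w_adapt += dt * (-w_adapt / tau_w)
    if v >= v_thresh:
        spikes.append(step * dt)
        v = v_reset
        w_adapt += b                  # ...and is recruited at each spike.

print(len(spikes), "spikes")
```

The slow variable introduces a timescale (here 100 ms) much longer than the membrane time constant, which is the basic ingredient that lets such neurons stabilize slow network dynamics without relying solely on slow NMDA-type synapses.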