Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems.
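As a minimal sketch of the interference argument (a toy delta-rule associator with arbitrary sizes and learning rates, not the simulations described above), the following compares focused versus interleaved learning of a new item and measures how much each disrupts previously learned items:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neocortex": a one-layer delta-rule associator (purely illustrative sizes).
n_in, n_out, n_items = 20, 10, 8
X = rng.choice([0.0, 1.0], size=(n_items, n_in))
Y = rng.choice([0.0, 1.0], size=(n_items, n_out))

def train(W, items, lr=0.05, epochs=200):
    for _ in range(epochs):
        for i in items:
            err = Y[i] - X[i] @ W
            W += lr * np.outer(X[i], err)      # delta-rule weight update
    return W

def old_item_error(W, items):
    return np.mean([(Y[i] - X[i] @ W) ** 2 for i in items])

old = list(range(n_items - 1))
W = train(np.zeros((n_in, n_out)), old)        # learn the first items, interleaved

# Focused training on the new item alone overwrites the old associations,
# whereas interleaving the new item with the old ones preserves them.
W_focused = train(W.copy(), [n_items - 1])
W_interleaved = train(W.copy(), old + [n_items - 1])

print("old-item error, focused new learning:    ", round(old_item_error(W_focused, old), 4))
print("old-item error, interleaved new learning:", round(old_item_error(W_interleaved, old), 4))
```

With focused training the weights are pulled toward the single new item and error on the earlier items rises, whereas interleaving the new item with the old ones keeps the earlier associations largely intact.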
We present a computational neural network model of recognition memory based on the biological structures of the hippocampus and medial temporal lobe cortex (MTLC), which perform complementary learning functions. The hippocampal component of the model contributes to recognition by recalling specific studied details. MTLC cannot support recall, but it is possible to extract a scalar familiarity signal from MTLC that tracks how well the test item matches studied items. We present simulations that establish key qualitative differences in the operating characteristics of the hippocampal recall and MTLC familiarity signals, and we identify several manipulations (e.g., target-lure similarity, interference) that differentially affect the two signals. We also use the model to address the stochastic relationship between recall and familiarity (i.e., whether they are independent) and the effects of partial versus complete hippocampal lesions on recognition.
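As a rough sketch of how a scalar familiarity signal can track the match between a test item and the study set (a generic global-match score over stored vectors; the model's actual MTLC signal is read out from a Hebbian-trained cortical layer, so the function and parameters below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_study = 50, 20

study = rng.choice([-1.0, 1.0], size=(n_study, n_feat))   # studied item vectors

def make_lure(pattern, k):
    """Copy of `pattern` with k randomly chosen features inverted (a related lure)."""
    out = pattern.copy()
    idx = rng.choice(n_feat, size=k, replace=False)
    out[idx] *= -1
    return out

def familiarity(probe, memory):
    # Global match: cubing each item similarity emphasizes near matches,
    # so the score mainly reflects how close the probe is to something studied.
    sims = memory @ probe / n_feat
    return float(np.sum(sims ** 3))

targets = study
related_lures = np.array([make_lure(p, 10) for p in study])
unrelated_lures = rng.choice([-1.0, 1.0], size=(n_study, n_feat))

for name, items in [("targets", targets),
                    ("related lures", related_lures),
                    ("unrelated lures", unrelated_lures)]:
    scores = [familiarity(p, study) for p in items]
    print(f"{name:15s} mean familiarity = {np.mean(scores):+.3f}")
```

Targets receive the highest scores, related lures intermediate scores, and unrelated lures the lowest, which is the graded, match-based behavior a familiarity signal is meant to capture.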
The prefrontal cortex has long been thought to subserve both working memory (the holding of information online for processing) and "executive" functions (deciding how to manipulate working memory and perform processing). Although many computational models of working memory have been developed, the mechanistic basis of executive function remains elusive, often amounting to a homunculus. This paper presents an attempt to deconstruct this homunculus through powerful learning mechanisms that allow a computational model of the prefrontal cortex to control both itself and other brain areas in a strategic, task-appropriate manner. These learning mechanisms are based on subcortical structures in the midbrain, basal ganglia, and amygdala, which together form an actor/critic architecture. The critic system learns which prefrontal representations are task-relevant and trains the actor, which in turn provides a dynamic gating mechanism for controlling working memory updating. Computationally, the learning mechanism is designed to simultaneously solve the temporal and structural credit assignment problems. The model's performance compares favorably with standard backpropagation-based temporal learning mechanisms on the challenging 1-2-AX working memory task and other benchmark working memory tasks.
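The actor/critic gating idea can be sketched with a much smaller task than 1-2-AX (the task, stimulus codes, and learning rates below are invented for illustration and are not the model's mechanisms): a critic estimates expected reward, and its reward-prediction error trains an actor that decides whether to update or maintain a single working-memory slot.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stimulus codes (invented for this sketch): 0 = cue 'A', 1 = cue 'B', 2 = distractor
gate_w = np.zeros(3)        # actor: gating tendency per stimulus type
value = 0.0                 # critic: running estimate of expected reward
lr_actor, lr_critic = 0.3, 0.05

def gate_prob(stim):
    return 1.0 / (1.0 + np.exp(-gate_w[stim]))   # sigmoid gating policy

rewards = []
for trial in range(3000):
    cue = rng.integers(2)            # the item that must be held in memory
    slot = None                      # contents of the single working-memory slot
    decisions = []
    for stim in (cue, 2):            # the cue is followed by a distractor
        gated = rng.random() < gate_prob(stim)   # actor: update or maintain?
        if gated:
            slot = stim
        decisions.append((stim, gated))
    reward = 1.0 if slot == cue else 0.0   # a correct report needs the cue in the slot
    rewards.append(reward)

    rpe = reward - value             # critic's reward-prediction error
    value += lr_critic * rpe
    for stim, gated in decisions:
        p = gate_prob(stim)
        # Actor: policy-gradient update, modulated by the critic's error signal.
        gate_w[stim] += lr_actor * rpe * (float(gated) - p)

print("gate probabilities [cue A, cue B, distractor]:",
      np.round([gate_prob(s) for s in range(3)], 2))
print("mean reward over the last 300 trials:", np.mean(rewards[-300:]))
```

Over training the actor learns to gate the cues into memory and to ignore the distractor, which is the qualitative behavior a dynamic gating mechanism must acquire.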
The hippocampus and related structures are thought to be capable of 1) representing cortical activity in a way that minimizes overlap of the representations assigned to different cortical patterns (pattern separation); and 2) modifying synaptic connections so that these representations can later be reinstated from partial or noisy versions of the cortical activity pattern that was present at the time of storage (pattern completion). We point out that there is a trade-off between pattern separation and completion and propose that the unique anatomical and physiological properties of the hippocampus might serve to minimize this trade-off. We use analytical methods to determine quantitative estimates of both separation and completion for specified parameterized models of the hippocampus. These estimates are then used to evaluate the roles of various properties of the hippocampus, such as the activity levels seen in different hippocampal regions, synaptic potentiation and depression, the multi-layer connectivity of the system, and the relatively focused and strong mossy fiber projections. This analysis is focused on the feedforward pathways from the entorhinal cortex (EC) to the dentate gyrus (DG) and region CA3. Among our results are the following: 1) Hebbian synaptic modification (LTP) facilitates completion but reduces separation, unless the strengths of synapses from inactive presynaptic units to active postsynaptic units are reduced (LTD). 2) Multiple layers, as in EC to DG to CA3, allow the compounding of pattern separation, but not pattern completion. 3) The variance of the input signal carried by the mossy fibers is important for separation, not the raw strength, which may explain why the mossy fiber inputs are few and relatively strong, rather than many and relatively weak like the other hippocampal pathways. 4) The EC projects to CA3 both directly and indirectly via the DG, which suggests that the two-stage pathway may dominate during pattern separation and the one-stage pathway may dominate during completion; methods the hippocampus may use to enhance this effect are discussed.
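The separation half of the trade-off can be illustrated with a toy sparse-coding layer (random weights and arbitrary sizes, not the paper's analytical models): projecting two overlapping input patterns through random weights and keeping only the k most strongly driven output units shows output overlap shrinking as the activity level is lowered.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 200, 1000              # e.g., a small "EC-like" input and a larger "DG-like" layer

W = rng.normal(size=(n_out, n_in))   # random feedforward weights

def sparse_code(x, k):
    """k-winners-take-all: only the k most strongly driven output units fire."""
    drive = W @ x
    out = np.zeros(n_out)
    out[np.argsort(drive)[-k:]] = 1.0
    return out

def overlap(a, b):
    return float(np.sum(a * b) / np.sqrt(np.sum(a) * np.sum(b)))

# Two input patterns that share 80% of their active units.
base = np.zeros(n_in); base[:40] = 1.0
similar = base.copy(); similar[32:40] = 0.0; similar[40:48] = 1.0
print(f"input overlap: {overlap(base, similar):.2f}")

for k in (200, 50, 10):              # decreasing activity level = sparser output code
    o = overlap(sparse_code(base, k), sparse_code(similar, k))
    print(f"activity level {k / n_out:.0%}: output overlap {o:.2f}")
```

For the same input overlap, sparser output activity yields smaller output overlaps, which is the sense in which low activity levels support pattern separation.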
The authors present a theoretical framework for understanding the roles of the hippocampus and neocortex in learning and memory. This framework incorporates a theme found in many theories of hippocampal function: that the hippocampus is responsible for developing conjunctive representations binding together stimulus elements into a unitary representation that can later be recalled from partial input cues. This idea is contradicted by the finding that hippocampally lesioned rats can learn nonlinear discrimination problems that require conjunctive representations. The authors' framework accommodates this finding by establishing a principled division of labor, in which the cortex is responsible for slow learning that integrates over multiple experiences to extract generalities, whereas the hippocampus performs rapid learning of the arbitrary contents of individual experiences. This framework suggests that tasks involving rapid, incidental conjunctive learning are better tests of hippocampal function. The authors implement this framework in a computational neural network model and show that it can account for a wide range of data in animal learning.
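A worked toy example of why nonlinear discriminations require conjunctive representations (a delta-rule sketch with invented values, not the authors' network model): in negative patterning (A rewarded, B rewarded, the AB compound not rewarded), an elemental learner over the two stimulus features cannot fit the targets, but adding a unit that codes the AB conjunction makes the problem solvable.

```python
import numpy as np

# Negative patterning: A -> reward, B -> reward, AB compound -> no reward.
X_elemental = np.array([[1., 0.],          # A
                        [0., 1.],          # B
                        [1., 1.]])         # AB
X_conjunctive = np.hstack([X_elemental, [[0.], [0.], [1.]]])   # extra unit coding the AB conjunction
y = np.array([1.0, 1.0, 0.0])

def train_delta(X, y, lr=0.05, epochs=5000):
    """Train a linear associator with the delta rule and return its predictions."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            w += lr * (t - x @ w) * x
    return X @ w

print("elemental predictions  :", np.round(train_delta(X_elemental, y), 2))   # cannot fit A+, B+, AB-
print("conjunctive predictions:", np.round(train_delta(X_conjunctive, y), 2)) # fits all three targets
```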