2020
DOI: 10.1111/cogs.12828
Sequential Presentation Protects Working Memory From Catastrophic Interference

Abstract: Neural network models of memory are notorious for catastrophic interference: Old items are forgotten as new items are memorized (French, 1999; McCloskey & Cohen, 1989). While working memory (WM) in human adults shows severe capacity limitations, these capacity limitations do not reflect neural-network-style catastrophic interference. However, our ability to quickly apprehend the numerosity of small sets of objects (i.e., subitizing) does show catastrophic capacity limitations, and this subitizing capacity and W…

Cited by 5 publications (8 citation statements)
References 100 publications (234 reference statements)
“…Before presenting our results, it is useful to outline possible psychological interpretations of the forgetting parameter. Similar forgetting parameters are widely used in related models (e.g., Bays, Singh-Curry, Gorgoraptis, Driver, & Husain, 2010; Endress & Szabó, 2020; Gottlieb, 2007; Knops, Piazza, Sengupta, Eger, & Melcher, 2014; Roggeman, Fias, & Verguts, 2010), and seem plausible at least at the single neuron level (e.g., Whitmire & Stanley, 2016). Forgetting functions have also been proposed at the macroscopic, cognitive level (e.g., Wixted & Ebbesen, 1991; Rubin & Wenzel, 1996), though the specific forgetting functions are debated.…”
Section: Results
Mentioning confidence: 94%
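For readers unfamiliar with such forgetting parameters, the following is a minimal sketch of how a constant decay factor typically enters a leaky-accumulator update. The update rule and the value of lambda_forget are illustrative assumptions, not the equations of any of the cited models.

```python
import numpy as np

def forgetting_step(activations, lambda_forget=0.9, inputs=None):
    """One discrete time step of a generic leaky accumulator.

    Each unit's activation decays by a constant factor (the
    'forgetting parameter') and is then incremented by any new input.
    lambda_forget close to 1 means slow forgetting; close to 0, fast.
    """
    decayed = lambda_forget * activations
    if inputs is not None:
        decayed = decayed + inputs
    return decayed

# Example: an item encoded at t=0 fades over 5 empty time steps.
a = np.array([1.0, 0.0])            # item 1 active, item 2 not yet seen
for t in range(5):
    a = forgetting_step(a)
print(a)                            # item 1's trace has decayed to lambda_forget**5
```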
“…While forgetting is time-based in our model, many authors argue that, psychologically speaking, there is no forgetting over time unless there are other stimuli that interfere with the memory items (e.g., Baddeley & Scott, 1971; Berman, Jonides, & Lewis, 2009; Nairne, Whiteman, & Kelley, 1999). Here, we do not attempt to decide between these possibilities; in fact, the model equations in Supplementary Material A make it plausible that our interference parameter might well mimic the role of forgetting (see Endress & Szabó, 2020). Our point simply is that the (time-based or interference-based) mechanisms that lead to forgetting are critical for learning to occur.…”
Section: Results
Mentioning confidence: 98%
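The point that an interference parameter can mimic time-based forgetting can be seen in a toy example: if items share a fixed activation budget (divisive normalization, an assumption made here purely for illustration and not the scheme in the cited Supplementary Material A), earlier items lose activation whenever later items are encoded, even without any explicit decay term.

```python
import numpy as np

def encode_with_normalization(activations, new_item_index, gain=1.0):
    """Encode a new item and renormalize total activation.

    No time-based decay: older items lose activation only because
    the fixed activation budget is shared with newer items.
    """
    activations = activations.copy()
    activations[new_item_index] += gain
    return activations / activations.sum()   # divisive normalization

a = np.zeros(4)
for i in range(4):                 # present items 0..3 in sequence
    a = encode_with_normalization(a, i)
    print(np.round(a, 2))          # earlier items' shares shrink step by step
```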
“…The network used here is a fairly generic saliency map (e.g., Bays et al., 2010; Endress & Szabó, 2020; Gottlieb, 2007; Roggeman et al., 2010; Sengupta et al., 2014) augmented by a Hebbian learning component. The network comprises units representing populations of neurons encoding syllables (or other items).…”
Section: The Current Study
Mentioning confidence: 99%
“…Here, I provide computational support for this idea, and show that such electrophysiological results can be explained in a simple, memory-less Hebbian network. The network is a fairly generic saliency map (e.g., Bays, Singh-Curry, Gorgoraptis, Driver, & Husain, 2010; Endress & Szabó, 2020; Gottlieb, 2007; Roggeman, Fias, & Verguts, 2010; Sengupta, Surampudi, & Melcher, 2014) augmented by a Hebbian learning component. The network comprises units representing populations of neurons encoding syllables (or other items).…”
Section: The Current Study
Mentioning confidence: 99%
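Both excerpts describe the same kind of architecture: item units embedded in a saliency map (self-excitation plus shared global inhibition) with a Hebbian learning rule on top. The sketch below is a minimal toy version under assumed parameter values and update rules; it is not the published network, only an illustration of the components the excerpts name.

```python
import numpy as np

class HebbianSaliencyMap:
    """Toy saliency-map network with a Hebbian learning component.

    Each unit stands for a population of neurons encoding one item
    (e.g., a syllable). Units retain a decaying trace of their own
    activation, share global inhibition (the saliency-map part), and
    co-active units strengthen their connection via a Hebbian rule.
    """

    def __init__(self, n_items, decay=0.9, inhibition=0.2, lr=0.05):
        self.a = np.zeros(n_items)              # unit activations
        self.w = np.zeros((n_items, n_items))   # Hebbian association weights
        self.decay, self.inhibition, self.lr = decay, inhibition, lr

    def step(self, input_vec):
        lateral = self.w @ self.a                # input from learned associations
        pooled = self.inhibition * self.a.sum()  # global (saliency-map) inhibition
        self.a = np.maximum(
            self.decay * self.a + input_vec + lateral - pooled, 0.0)
        # Hebbian update: co-active units wire together (no self-connections).
        dw = self.lr * np.outer(self.a, self.a)
        np.fill_diagonal(dw, 0.0)
        self.w += dw

# Present the "syllables" of a two-item word (0 then 1) a few times.
net = HebbianSaliencyMap(n_items=3)
for _ in range(5):
    net.step(np.array([1.0, 0.0, 0.0]))
    net.step(np.array([0.0, 1.0, 0.0]))
print(np.round(net.w, 3))   # the 0<->1 weights grow; item 2 stays unassociated
```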
“…While the time step is arbitrary in the absence of external input (see Endress & Szabó, 2020, for a proof), I use the duration of individual units (e.g., syllables, visual symbols, etc.) as the time unit in the discretization, as associative learning is generally invariant under temporal scaling of the experiment (e.g., Gallistel & Gibbon, 2000; Gallistel, Mark, King, & Latham, 2001).…”
Section: Supplementary Materials A: Model Definition
Mentioning confidence: 99%
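One way to unpack the discretization argument is with a generic leaky integrator (an assumption for illustration only; the actual model equations are in the cited Supplementary Material A): choosing one item's duration as the time step makes the discrete update depend only on the product of the decay rate and the step length, so uniformly rescaling the experiment's durations leaves the discrete dynamics unchanged.

```latex
% Generic leaky-integrator illustration (not the cited model's exact equations):
% continuous decay and its discretization with one item's duration as the time step.
\begin{align}
  \frac{da_i(t)}{dt} &= -\gamma\, a_i(t) + I_i(t) \\
  a_i(t+\Delta t) &= e^{-\gamma\,\Delta t}\, a_i(t) + \text{(input accumulated during the item)},
  \quad \Delta t = \text{one item duration}
\end{align}
% Rescaling all durations by a factor $k$ (so $\Delta t \to k\,\Delta t$, $\gamma \to \gamma/k$)
% leaves $\gamma\,\Delta t$, and hence the discrete update, unchanged, which is one way to
% read the temporal-scaling invariance referenced in the excerpt above.
```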