2018
DOI: 10.1101/504936
Preprint

Low dimensional dynamics for working memory and time encoding

Abstract: Our decisions often depend on multiple sensory experiences separated by time delays. The brain can remember these experiences and, simultaneously, estimate the timing between events. To understand the mechanisms underlying working memory and time encoding we analyze neural activity recorded during delays in four experiments on non-human primates. To disambiguate potential mechanisms, we propose two analyses, namely, decoding the passage of time from neural data, and computing the cumulative dimensionality of t…
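As a rough illustration of the second proposed analysis, cumulative dimensionality can be estimated as the number of principal components needed to explain most of the variance of the trial-averaged trajectory from the start of the delay up to each time t. The sketch below is an assumption about that procedure (the 90% variance threshold, function name, and data layout are illustrative), not the authors' exact definition.

```python
import numpy as np

def cumulative_dimensionality(X, var_threshold=0.9):
    """Estimate the dimensionality of the neural trajectory up to each time bin.

    X : array of shape (n_timebins, n_neurons), trial-averaged activity (assumed layout).
    Returns, for every time t, the number of principal components needed to
    explain `var_threshold` of the variance of the trajectory X[:t+1].
    Sketch only; the threshold and exact definition are assumptions.
    """
    dims = []
    for t in range(X.shape[0]):
        segment = X[: t + 1] - X[: t + 1].mean(axis=0)   # center the trajectory so far
        s = np.linalg.svd(segment, compute_uv=False)     # singular values -> variances
        var = s ** 2
        if var.sum() == 0:                               # flat trajectory so far
            dims.append(0)
            continue
        frac = np.cumsum(var) / var.sum()
        dims.append(int(np.searchsorted(frac, var_threshold) + 1))
    return np.array(dims)

# Example with synthetic data: 50 time bins, 100 neurons, a random-walk trajectory.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 100)).cumsum(axis=0)
print(cumulative_dimensionality(X)[:10])
```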


Cited by 49 publications (87 citation statements)
References 48 publications

“…We approached this question through decoding, as the presence of sequential dynamics such as “time cells” [MacDonald et al, 2011] should allow us to decode the passage of time from the neural data [Bakhurin et al, 2017, Robinson et al, 2017, Cueva et al, 2019]. We used an ensemble of linear classifiers trained to discriminate the population activity between every pair of time points [Bakhurin et al, 2017, Cueva et al, 2019] in the tone and trace periods of the trial (0-35 sec, 2.5 sec bins). To illustrate the idea behind this analysis, we can summarize the activity of the network at each point in time as a point in a high dimensional neural state space, where the axes in this space corresponds to the activity rate of each neuron (schematized in Fig.…”
Section: Results (mentioning; confidence: 99%)
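As a minimal sketch of the decoding analysis quoted above: for every pair of time bins a linear classifier is trained to discriminate the two population activity patterns, and its cross-validated accuracy is recorded. The data layout (trials × time bins × neurons), the use of logistic regression, and 5-fold cross-validation are assumptions for illustration, not the cited studies' exact pipeline.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pairwise_time_decoding(R):
    """R: array (n_trials, n_bins, n_neurons) of binned firing rates (assumed layout).

    Returns an (n_bins, n_bins) matrix of cross-validated accuracies for linear
    classifiers trained to discriminate every pair of time bins.
    """
    n_trials, n_bins, _ = R.shape
    acc = np.full((n_bins, n_bins), np.nan)
    for i, j in combinations(range(n_bins), 2):
        X = np.vstack([R[:, i, :], R[:, j, :]])                  # trials from the two bins
        y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])
        clf = LogisticRegression(max_iter=1000)
        scores = cross_val_score(clf, X, y, cv=5)                 # 5-fold cross-validation
        acc[i, j] = acc[j, i] = scores.mean()
    return acc

# Example with synthetic drifting activity: 40 trials, 14 bins (0-35 s, 2.5 s), 60 neurons.
rng = np.random.default_rng(1)
drift = np.linspace(0, 1, 14)[None, :, None] * rng.normal(size=(1, 1, 60))
R = drift + 0.5 * rng.normal(size=(40, 14, 60))
print(pairwise_time_decoding(R).round(2))
```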
“…By extending this analysis to compare all possible pairs of time points (i.e. for all possible Δt's), we can identify moments during the task that exhibit reliable temporal dynamics across trials [Bakhurin et al, 2017, Cueva et al, 2019]. We note however that the ability to decode time is not an exclusive feature of neural sequences, but a signature of any consistent dynamical trajectory where the neural states become sufficiently decorrelated in time (e.g.…”
Section: Results (mentioning; confidence: 99%)
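The point about decorrelation can be checked directly: correlating trial-averaged population vectors across time bins shows how quickly neural states separate as elapsed time grows. The sketch below reuses the assumed (trials × bins × neurons) layout and is illustrative only, not the cited papers' exact analysis.

```python
import numpy as np

def state_similarity_matrix(R):
    """Correlation between trial-averaged population vectors at every pair of time bins.

    R: array (n_trials, n_bins, n_neurons) of binned rates (assumed layout).
    Values that fall off away from the diagonal mean the population state
    decorrelates with elapsed time, which is what lets time be decoded.
    """
    mean_traj = R.mean(axis=0)        # (n_bins, n_neurons) trial-averaged trajectory
    return np.corrcoef(mean_traj)     # (n_bins, n_bins) correlations between time bins

# Example with synthetic drifting activity (same layout as the sketch above).
rng = np.random.default_rng(2)
drift = np.linspace(0, 1, 14)[None, :, None] * rng.normal(size=(1, 1, 60))
R = drift + 0.5 * rng.normal(size=(40, 14, 60))
print(state_similarity_matrix(R).round(2))
```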
“…While these findings are interesting, our study has limitations in its scope and analysis. Foremost is that we consider only a single, extremely simple task (see [10] for important progress in understanding the dimensionality of RNNs trained on a more complicated task). A clear need in future work is also to consider a wider range of task and model parameters: for example, input dimension between 2 and N , and to consider higher-dimensional outputs specified by more than two class labels.…”
Section: Conclusion and Discussion (mentioning; confidence: 99%)
“…Despite this high degree of variability in neural responses, repeatable and reliable activity structure is often unveiled by dimensionality reduction procedures [11,42]. Rather than being set by, say, the number of neurons engaged in the circuit, the effective dimensionality of the activity (often called neural "representation") seems to be intimately linked to the complexity of the function, or behavior, that the neural circuit fulfills or produces [16,44,49,10]. These findings appear to show some universality: similar task-dependent dimensional representations can manifest in artificial networks used in machine learning systems trained using optimization algorithms (e.g., [38,10,53]).…”
Section: Introduction (mentioning; confidence: 99%)
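One common way to quantify the "effective dimensionality" mentioned above is the participation ratio of the covariance eigenvalues; the sketch below uses that estimator as an assumption, since the cited works may rely on other measures (e.g., variance-threshold PCA counts).

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of an activity matrix X (n_samples, n_neurons).

    Defined as (sum of covariance eigenvalues)**2 / sum of squared eigenvalues;
    it equals n when n eigenvalues are equal and 1 when a single mode dominates.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    eig = np.linalg.eigvalsh(cov)          # eigenvalues of the covariance matrix
    eig = np.clip(eig, 0, None)            # guard against tiny negative values
    return eig.sum() ** 2 / (eig ** 2).sum()

# Example: activity confined to ~3 latent directions out of 100 neurons.
rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 3))
X = latent @ rng.normal(size=(3, 100)) + 0.1 * rng.normal(size=(500, 100))
print(participation_ratio(X))              # close to 3
```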
“…In this case, a target population could use the same method to reliably read out an unchanging representation whenever it is required. Conversely, representing relevant information in a time sensitive manner is also critical, since most behaviors are organized in time, whether they are internally generated or aligned to external events [Meyers 2018, Cueva et al 2019]. For example, our task includes a recurring sequence of events (fixation point, reward predicting cue, delay, joystick instruction cue, etc…), which the subjects have learned.…”
Section: Discussion (mentioning; confidence: 99%)