2020
DOI: 10.1101/2020.02.21.959163
Preprint

Neural manifold under plasticity in a goal driven learning behaviour

Abstract: Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called the neural manifold. However, the neural mechan…
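To make the abstract's central object concrete: a neural manifold is usually estimated by applying linear dimensionality reduction to population activity, with the leading components (the "neural modes") spanning the low-dimensional subspace. The sketch below is purely illustrative and is not the authors' code; the synthetic data, array shapes, and the choice of PCA (related experimental work often uses factor analysis instead) are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic population activity: 100 neurons over 1000 time bins, secretly
# driven by only 5 latent signals plus noise, so it lies near a
# 5-dimensional manifold.
n_neurons, n_bins, n_latents = 100, 1000, 5
latents = rng.standard_normal((n_bins, n_latents))      # latent signals
patterns = rng.standard_normal((n_latents, n_neurons))  # covariation patterns
rates = latents @ patterns + 0.1 * rng.standard_normal((n_bins, n_neurons))

# The leading principal components estimate the neural modes; their span is
# the estimated neural manifold.
pca = PCA(n_components=10).fit(rates)
print(pca.explained_variance_ratio_.round(3))  # variance concentrates in ~5 modes

# Low-dimensional trajectory of the population within the manifold.
trajectory = pca.transform(rates)[:, :n_latents]
```

In these terms, a "within-manifold" change is one that stays inside the span of the fitted modes, which is the condition the abstract identifies for fast adaptation.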

Cited by 13 publications (15 citation statements)
References 50 publications

“…For example, spike-frequency adaptation was shown to expand the memory exhibited by recurrent circuits (43), and different forms of biologically plausible synaptic learning rules have been employed to enhance the computational performance of recurrent networks in an unsupervised fashion (44–47). Furthermore, several studies made direct attempts to link learning in recurrent networks to optimization of state-space dynamics (48, 49) or metalearning (50). While none of these recurrent models tried to explain how persistent responses after brief stimulation can interact with learning, they provide valuable insights into the various means by which refinements in internal network dynamics result in improved output performance.…”
Section: Discussion (mentioning)
confidence: 99%
“…Homeostatic preservation of predictive models may allow the brain to benefit from large networks during learning [138–140], and optimize these networks without extensive re-training. The processes we examine here may also be similar to those that allow transfer of learned motor skills despite gradual change in the readout of a brain-machine interface [141–143].…”
Section: Discussion (mentioning)
confidence: 99%
“…One factor not examined here in the mean-rate model is the dimensionality of neural modes within the recurrent network formed by sender neurons. While this consideration has been the subject of extensive theoretical work [26, 41–44], the focus of the current work was the feedforward propagation of neural modes, and not their origin within recurrent circuits. Further work that examines both aspects of communication in a unified framework would provide an increased understanding of how interactions both within and across brain areas give rise to sensory perception and motor planning.…”
Section: Future Work and Conclusion (mentioning)
confidence: 99%
“…Other models are hard-wired to perform feedforward gating of neural activity [25] but offer no systematic way to control the transmission of null and potent modes. Finally, some models learn to generate low-dimensional representations of a signal [26, 27] but focus on activity within a single brain area.…”
Section: Introduction (mentioning)
confidence: 99%
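For readers unfamiliar with the "null and potent modes" in the last excerpt: relative to a linear readout y = Wx, the potent space is the row space of W (activity there changes the downstream output) and the null space is its kernel (activity there is invisible downstream). A minimal numerical sketch of this decomposition, with the readout W synthetic and assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons, n_outputs = 50, 3
W = rng.standard_normal((n_outputs, n_neurons))  # assumed linear readout y = W @ x

# Orthogonal projector onto the potent space (row space of W);
# its complement projects onto the null space (kernel of W).
P_potent = W.T @ np.linalg.pinv(W.T)
P_null = np.eye(n_neurons) - P_potent

x = rng.standard_normal(n_neurons)  # one population activity vector
x_potent, x_null = P_potent @ x, P_null @ x

# The null component leaves the readout untouched; the potent component
# carries all of the output.
assert np.allclose(W @ x_null, 0, atol=1e-9)
assert np.allclose(W @ x_potent, W @ x)
```

This is what gives "output-null" activity its name: a network can move freely along null directions without perturbing what a downstream area receives, which is why controlling the transmission of null versus potent modes is nontrivial.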