2018
DOI: 10.1371/journal.pone.0191527

full-FORCE: A target-based method for training recurrent networks

Abstract: Trained recurrent networks are powerful tools for modeling dynamic neural computations. We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations. The method introduces a second network during training to provide suitable “target” dynamics useful for performing the task. Because it exploits the full recurrent connectivity, the method produces networks that perform tasks with fewer neurons…
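The mechanics of the target-based scheme are easiest to see in code. Below is a minimal sketch of the idea, not the authors' implementation: a driven "teacher" network receives the target output as an input, the recorded recurrent drive of each teacher unit serves as the training target for the learner's full connectivity matrix, and batch ridge regression stands in for the paper's online recursive least-squares updates. All parameter names and values (N, g, tau, the sinusoidal target, the regularizer lam) are illustrative assumptions.

```python
# Minimal sketch of target-based ("full-FORCE"-style) training.
# Batch ridge regression replaces the paper's online RLS; all
# parameters here are illustrative, not the reference implementation.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau, g = 200, 2000, 1e-3, 10e-3, 1.5

# Task: produce a simple periodic output with no external input.
t = np.arange(T) * dt
f_out = np.sin(2 * np.pi * 2 * t)                    # target output f(t)

J_D = g * rng.standard_normal((N, N)) / np.sqrt(N)   # teacher recurrence
u = rng.uniform(-1, 1, N)                            # feeds target into teacher

# 1) Run the driven "teacher" network, which receives the target output
#    as input. Record each unit's total recurrent drive: these recorded
#    drives are the training targets for the learner.
x = np.zeros(N)
targets = np.zeros((T, N))
rates = np.zeros((T, N))
for k in range(T):
    r = np.tanh(x)
    drive = J_D @ r + u * f_out[k]
    targets[k] = drive
    rates[k] = r
    x = x + dt / tau * (-x + drive)

# 2) Fit the learner's FULL recurrent matrix J so that J @ r(t)
#    reproduces the teacher's drive (ridge-regularized least squares).
lam = 1e-3
A = rates.T @ rates + lam * np.eye(N)
J = np.linalg.solve(A, rates.T @ targets).T          # shape (N, N)

# 3) Fit a linear readout w on the learner's own autonomous run.
x = np.zeros(N)
R = np.zeros((T, N))
for k in range(T):
    r = np.tanh(x)
    R[k] = r
    x = x + dt / tau * (-x + J @ r)                  # no input: autonomous
w = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ f_out)
print("readout MSE:", np.mean((R @ w - f_out) ** 2))
```

In the paper the teacher and learner run simultaneously and the learner's weights are updated online; the batch version above only conveys the structure of the targets and why the full connectivity matrix, not just a readout, is modified.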

Cited by 130 publications (164 citation statements): 2 supporting, 162 mentioning, 0 contrasting. References 31 publications; citing publications span 2018 to 2024.

Citation statements, ordered by relevance:
“…It is also worth noting that other mechanisms for dimensionality expansion exist for recurrent networks and would be interesting to explore in future works. Examples include training the network with an explicit dimensionality term in the loss function or implicitly via a highly variable "target" network as in [26,12]. Indeed, we have used only a single, basic type of "vanilla RNN" network model, and extensions toward more complex models is important if we are to generalize our findings to other machine learning settings, and to make more confident predictions about the brain's circuits.…”
Section: Conclusion and Discussion (mentioning)
confidence: 99%
“…To test the computational implications of trajectory divergence, we trained recurrent neural networks with an atypical approach. Rather than training networks to produce an output, we trained them to autonomously follow a target internal trajectory 38,51. We then asked whether networks were able to follow those trajectories from beginning to end, without the benefit of any inputs indicating when to…”
Section: Trajectory-constrained Neural Network (mentioning)
confidence: 99%
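A rough sketch of what training a network to "autonomously follow a target internal trajectory" can mean in practice, under assumptions of my own choosing (a low-dimensional sinusoidal target trajectory and batch least squares, rather than the cited papers' actual targets or training procedures): fit the recurrent weights so the autonomous dynamics reproduce a prescribed state trajectory x*(t), then free-run the network from the trajectory's start and track the divergence.

```python
# Hedged sketch of trajectory-constrained training: fit the recurrence
# so the autonomous dynamics tau*dx/dt = -x + J@tanh(x) reproduce a
# prescribed internal trajectory x*(t), then measure how well the
# free-running network tracks it. The target here is an illustrative
# random projection of sinusoids, not the cited papers' targets.
import numpy as np

rng = np.random.default_rng(1)
N, T, dt, tau = 100, 1500, 1e-3, 10e-3
t = np.arange(T) * dt

# Target internal trajectory x*(t): random N-dim embedding of 2 sinusoids.
M = rng.standard_normal((N, 2))
s = np.vstack([np.sin(2 * np.pi * 3 * t), np.cos(2 * np.pi * 5 * t)])
x_star = (M @ s).T                                   # shape (T, N)

# Target dynamics imply a required drive: J @ tanh(x*) = tau*dx*/dt + x*.
dxdt = np.gradient(x_star, dt, axis=0)
drive = tau * dxdt + x_star
r_star = np.tanh(x_star)

lam = 1e-4
J = np.linalg.solve(r_star.T @ r_star + lam * np.eye(N),
                    r_star.T @ drive).T              # shape (N, N)

# Free-run from the trajectory's initial state with NO inputs, and
# measure relative divergence from the target along the way.
x = x_star[0].copy()
err = np.zeros(T)
for k in range(T):
    err[k] = np.linalg.norm(x - x_star[k]) / np.linalg.norm(x_star[k])
    x = x + dt / tau * (-x + J @ np.tanh(x))
print("relative error at end of trajectory:", err[-1])
```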
“…Moreover, the trajectories are chaotic, and so inherently unstable and not robust to noise. Although, recent theoretical work 10,14 has demonstrated ways of making them robust.…”
Section: Introduction (mentioning)
confidence: 99%