2017
DOI: 10.1371/journal.pone.0173684
Iterative free-energy optimization for recurrent neural networks (INFERNO)

Abstract: The intra-parietal lobe coupled with the Basal Ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. As a novel framework based on the free-energy principle, we propose to treat the problem of spike synchrony as an optimization problem over the neurons' sub-threshold activity for the generatio…

Cited by 12 publications (35 citation statements)
References 95 publications (138 reference statements)
“…We showed in [1] that this variational process is similar to a stochastic gradient descent algorithm performed iteratively. We here add a more sophisticated gradient descent algorithm, corresponding to a simulated annealing mechanism, in order to account for the neuromodulators involved in decision-making in the PFC for uncertainty and surprise [16,95].…”
Section: The Network Architecture INFERNO Gate
confidence: 99%
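The excerpt describes optimizing sub-threshold activity by an iterative stochastic search with a simulated-annealing acceptance rule. A minimal sketch of that mechanism follows; the recurrent weights, target pattern, proposal noise, and cooling rate are illustrative assumptions, not values from the paper.

```python
# Sketch: iterative stochastic optimization of a sub-threshold control
# signal with simulated-annealing acceptance (toy setup, not the authors'
# code).
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(scale=0.3, size=(50, 50))   # fixed recurrent weights (assumed)
target = rng.uniform(size=50)              # desired output pattern (assumed)

def prediction_error(u):
    """Squared error between the network's rate output and the target."""
    rates = np.tanh(W @ u)                 # one-step recurrent read-out
    return np.mean((rates - target) ** 2)

u = np.zeros(50)                           # sub-threshold control signal
cur_err = prediction_error(u)
T = 1.0                                    # annealing temperature

for step in range(2000):
    candidate = u + rng.normal(scale=0.1, size=50)   # stochastic proposal
    err = prediction_error(candidate)
    # Always accept improvements; occasionally accept regressions while T
    # is high -- the exploratory role the excerpt assigns to neuromodulation.
    if err < cur_err or rng.uniform() < np.exp((cur_err - err) / T):
        u, cur_err = candidate, err
    T *= 0.995                             # geometric cooling (assumed rate)

print(f"final error: {cur_err:.4f}")
```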
“…In this paper, we propose to use the neural architecture INFERNO, standing for Iterative Free-Energy Optimization for Recurrent Neural Networks, for the learning of temporal patterns and the serial recall of sequences [1,2]. We originally proposed this neuronal architecture to model the cortico-basal ganglia loop [1] for retrieving motor and audio primitives using Spike-Timing-Dependent Plasticity (STDP) within the framework of predictive coding and free-energy minimization [3,4,5]. Here, we propose to implement a similar free-energy minimization network, but this time in the prefrontal-basal ganglia loop, for the serial recall of memory sequences and for the learning of temporal pattern primitives, using gain modulation instead of STDP.…”
Section: Proposal Framework for Sequence Learning
confidence: 99%
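The quoted framework rests on predictive coding with free-energy minimization: an internal estimate is refined by gradient descent on the prediction error. Below is a hedged, self-contained sketch of that update, assuming a toy generative mapping g(mu) = tanh(G mu); the matrix G, the sizes, and the learning rate are assumptions made for illustration.

```python
# Sketch: predictive-coding update of an internal cause estimate by
# gradient descent on the squared prediction error (toy generative model).
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(scale=0.5, size=(20, 10))   # assumed generative mapping

def g(mu):
    """Predicted observation for the internal estimate mu."""
    return np.tanh(G @ mu)

x = rng.uniform(size=20)                   # observed pattern (assumed)
mu = np.zeros(10)                          # internal cause estimate
lr = 0.05

for _ in range(500):
    pred = g(mu)
    eps = x - pred                         # prediction error
    # Gradient of F = 0.5 * ||x - tanh(G mu)||^2 w.r.t. mu, via the chain
    # rule through tanh; stepping downhill reduces the prediction error.
    mu += lr * G.T @ ((1 - pred ** 2) * eps)

print(f"residual error: {np.mean((x - g(mu)) ** 2):.4f}")
```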
“…Minimizing the free energy means predicting, for one particular policy, its expected state and optimizing it over time in order to minimize future errors [4]. Our neural model is based on this principle of Iterative Free-Energy Optimization for Recurrent Neural Networks, and we named it INFERNO [9]; see Fig. 1.…”
Section: Introduction
confidence: 99%
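The excerpt ties free-energy minimization to policy evaluation: predict, for each candidate policy, the states it would produce, and prefer the policy whose predicted trajectory accumulates the least future error. The sketch below illustrates this reading with toy linear-plus-tanh dynamics; the transition matrices, goal state, and exhaustive enumeration over short policies are all assumptions, not the authors' implementation.

```python
# Sketch: score each candidate policy by the accumulated error of its
# predicted trajectory, and select the minimizer (toy dynamics).
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(scale=0.2, size=(8, 8))     # assumed state-transition matrix
B = rng.normal(scale=0.5, size=(8, 3))     # assumed action-input matrix
goal = rng.uniform(size=8)                 # desired final state (assumed)

def expected_cost(policy, x0):
    """Roll a policy (sequence of one-hot actions) forward, summing errors."""
    x, cost = x0.copy(), 0.0
    for a in policy:
        x = np.tanh(A @ x + B @ a)         # predicted next state
        cost += np.sum((x - goal) ** 2)    # accumulated future error
    return cost

x0 = np.zeros(8)
actions = np.eye(3)                        # three discrete actions
# Enumerate all length-4 policies; keep the one with minimal expected cost.
policies = [[actions[i] for i in idx] for idx in np.ndindex(3, 3, 3, 3)]
best = min(policies, key=lambda p: expected_cost(p, x0))
print("best action sequence:", [int(np.argmax(a)) for a in best])
```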