Synaptic plasticity is a central theme in neuroscience. The framework of three-factor learning rules provides a powerful abstraction, helping to navigate the abundance of models of synaptic plasticity. It is well known that dopaminergic modulation of learning is related to reward, but theoretical models predict other functional roles for the modulatory third factor: it may encode errors for supervised learning, summary statistics of population activity for unsupervised learning, or attentional feedback. Specialized structures may be needed to generate and propagate third factors in the neural network.
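As a purely illustrative sketch (the constants, variable names, and random signals below are assumptions, not taken from any particular model in this literature), a generic three-factor rule separates a Hebbian eligibility trace, driven by pre- and postsynaptic coincidence, from a modulatory third factor M that converts the trace into a weight change:

import numpy as np

# Generic three-factor rule (illustrative): Hebbian coincidence is stored as
# an eligibility trace and turned into a weight change by a third factor M.
rng = np.random.default_rng(0)
n_pre, n_post = 5, 3
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))   # synaptic weights
elig = np.zeros_like(w)                          # eligibility traces
eta, tau_e = 0.01, 0.9                           # learning rate, trace decay (assumed values)

for t in range(100):
    pre = rng.random(n_pre)                      # presynaptic rates (factor 1)
    post = np.tanh(w @ pre)                      # postsynaptic rates (factor 2)
    M = rng.normal()                             # modulatory signal (factor 3), e.g. dopamine
    elig = tau_e * elig + np.outer(post, pre)    # decaying pre/post coincidence
    w += eta * M * elig                          # weight update gated by the third factor

Reading M as a reward prediction error gives reward-modulated learning; reading it as an error signal, a population summary statistic, or attentional feedback gives the other functional roles mentioned above.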
Blind source separation is the computation underlying the cocktail party effect: a partygoer can single out a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal-processing strategy for sensory perception, and numerous mathematical models have been proposed; however, it remains unclear how neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, distinct classes of neurons in the culture learned to respond to distinct sources after repeated training stimulation. Moreover, the structure of the neural network changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free-energy principle.
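The experiments themselves were performed on living cortical cultures; as a purely computational illustration of the blind source separation problem, the sketch below recovers two hidden sources from a linear mixture with FastICA (a standard ICA algorithm used here only as a stand-in, not the biological mechanism studied in the culture):

import numpy as np
from sklearn.decomposition import FastICA

# Computational stand-in for blind source separation: two hidden sources are
# mixed linearly, then estimated from the mixtures alone.
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two hidden sources
mixing = np.array([[1.0, 0.5], [0.5, 2.0]])             # unknown mixing matrix
mixed = sources @ mixing.T                              # observed mixtures

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)                    # estimates, up to order and scale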
This letter considers a class of biologically plausible cost functions for neural networks, in which the same cost function is minimized by both neural activity and plasticity. We show that such cost functions can be cast as a variational bound on model evidence under an implicit generative model. Using generative models based on partially observed Markov decision processes (POMDPs), we show that neural activity and plasticity perform Bayesian inference and learning, respectively, by maximizing model evidence. Using mathematical and numerical analyses, we establish a formal equivalence between neural network cost functions and variational free energy under certain prior beliefs about the latent states that generate inputs. These prior beliefs are determined by particular constants (e.g., thresholds) that define the cost function. This means that Bayes-optimal encoding of latent or hidden states is achieved when the network's implicit priors match the process that generates its inputs. The equivalence is potentially important because it suggests that any hyperparameter of a neural network can itself be optimized by minimizing variational free energy. Furthermore, it enables one to characterize a neural network formally, in terms of its prior beliefs.
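For reference, the variational free energy invoked here has its standard form from the variational inference literature (written for a generic latent state s and observation o; this is the textbook definition, not a result specific to this letter):

F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o) \;\geq\; -\ln p(o)

Minimising F with respect to the recognition density q(s) (associated with neural activity) tightens the bound, while minimising it with respect to the parameters of the generative model (associated with plasticity) maximises the lower bound -F on the log model evidence \ln p(o).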
This paper considers the emergence of generalised synchrony in ensembles of coupled self-organising systems, such as neurons. We start from the premise that any self-organising system complies with the free energy principle, by virtue of placing an upper bound on its entropy. Crucially, the free energy principle allows one to interpret biological systems as inferring the state of their environment or external milieu. An emergent property of this inference is synchronisation among an ensemble of systems that infer each other. Here, we investigate the implications for neuronal dynamics by simulating neuronal networks in which each neuron minimises its free energy. We cast the ensuing ensemble dynamics in terms of inference and show that cardinal behaviours of neuronal networks, both in vivo and in vitro, can be explained by this framework. In particular, we test the hypotheses that (i) generalised synchrony is an emergent property of free energy minimisation, thereby explaining synchronisation in the resting brain; (ii) desynchronisation is induced by exogenous input, thereby explaining event-related desynchronisation; and (iii) structure learning emerges in response to causal structure in exogenous input, thereby explaining functional segregation in real neuronal systems.
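As a toy illustration only (a Kuramoto phase model, named plainly as a stand-in; the paper instead simulates neurons that each minimise their own free energy), the sketch below shows synchrony emerging once heterogeneous units are coupled:

import numpy as np

# Toy stand-in: synchronisation in a Kuramoto phase model with heterogeneous
# intrinsic frequencies; the order parameter approaches 1 as units lock.
rng = np.random.default_rng(2)
n, K, dt = 20, 1.5, 0.01
omega = rng.normal(1.0, 0.1, n)           # intrinsic frequencies
theta = rng.uniform(0, 2 * np.pi, n)      # initial phases

for _ in range(5000):
    drive = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + drive)

order = np.abs(np.exp(1j * theta).mean())  # ~1 indicates ensemble synchrony
print(f"order parameter: {order:.2f}")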
This work considers a class of canonical neural networks comprising rate-coding models, wherein neural activity and plasticity minimise a common cost function, and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that delayed modulation of Hebbian plasticity, accompanied by adaptation of firing thresholds, is a sufficient neuronal substrate to attain Bayes-optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.
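A minimal sketch of the key ingredient, delayed modulation of Hebbian plasticity with adaptive firing thresholds, is given below; the delay length, outcome signal, and target firing rate are assumptions for illustration and do not reproduce the paper's POMDP simulations or maze tasks:

import numpy as np
from collections import deque

# Illustrative sketch: Hebbian terms are buffered and only applied once a
# delayed modulatory outcome signal arrives; thresholds adapt alongside.
rng = np.random.default_rng(3)
n_in, n_out, delay, eta = 4, 2, 3, 0.05
w = rng.normal(0.0, 0.1, (n_out, n_in))    # synaptic weights
thresh = np.zeros(n_out)                   # adaptive firing thresholds
trace = deque(maxlen=delay)                # Hebbian terms awaiting modulation

for t in range(200):
    x = rng.random(n_in)                              # input rates
    y = 1.0 / (1.0 + np.exp(-(w @ x - thresh)))       # rate-coded output
    trace.append((np.outer(y, x), y))

    if len(trace) == delay:                           # modulation arrives after the delay
        hebb, y_past = trace[0]
        outcome = rng.choice([1.0, -1.0])             # delayed outcome signal (assumed)
        w += eta * outcome * hebb                     # delayed, modulated Hebbian update
        thresh += eta * (y_past - 0.5)                # adapt thresholds toward a target rate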