How can neural networks learn to efficiently represent complex, high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that feedforward weights are learned through pairwise Hebbian-like plasticity. Here, we show that such plasticity works only under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. In this scheme, recurrent connections learn to locally balance feedforward input in individual dendritic compartments and can thereby modulate synaptic plasticity to learn efficient representations. We demonstrate in simulations that this scheme works robustly even for complex, high-dimensional inputs and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory–inhibitory balance and voltage-dependent synaptic plasticity, as observed in vivo, and suggest that both are crucial for representation learning.
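The core mechanism of this abstract, recurrent connections learning to cancel feedforward input in a dendritic compartment so that the residual voltage can gate plasticity, can be illustrated with a minimal rate-based sketch. This is an assumed toy reduction, not the authors' spiking implementation: the ReLU rate function, the white-noise input, and the learning rate are illustrative choices, and the feedforward weights are held fixed here to isolate the balance-learning step.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W_ff = rng.normal(0.0, 0.1, (n_out, n_in))  # fixed feedforward weights (toy choice)
W_rec = np.zeros((n_out, n_out))            # recurrent weights, to be learned
eta_rec = 0.02

resids = []
for _ in range(4000):
    x = rng.normal(size=n_in)        # white-noise input sample (assumed statistics)
    ff = W_ff @ x                    # feedforward drive to the dendritic compartment
    r = np.maximum(ff, 0.0)          # firing rates (ReLU, a toy choice)
    u = ff - W_rec @ r               # residual dendritic voltage after recurrent cancellation
    # Local rule: strengthen recurrent input wherever a residual persists,
    # driving the compartment toward excitatory-inhibitory balance (u -> 0).
    W_rec += eta_rec * np.outer(u, r)
    # In the full scheme, feedforward plasticity would be gated by this same
    # residual voltage (schematically, W_ff += eta_ff * np.outer(u, x));
    # it is omitted here to keep the balance-learning step in focus.
    resids.append(np.mean(u**2))
```

As training proceeds, the mean squared residual voltage shrinks, which is the sense in which recurrent connections "locally balance" feedforward input before any representation learning takes place.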
Highlights
- Synaptic vesicle (SV) supply needs to be adapted to synaptic activity
- Presynapses that recycle more SVs contain more newly synthesized proteins
- Colchicine disrupts the correlation between synaptic activity and protein turnover
- Chronic stimulation or depression also abolishes this correlation
Top-down feedback in cortex is critical for guiding sensory processing, and has been formalized most prominently in the theory of hierarchical predictive coding (hPC). However, experimental evidence for error units, which are central to the theory, is inconclusive, and it remains unclear how hPC can be implemented with spiking neurons. To address this, we connect hPC to existing work on efficient coding in balanced networks with lateral inhibition and on predictive computation at apical dendrites. Together, this work points to an efficient implementation of hPC with spiking neurons, where prediction errors are computed not in separate units but locally in dendritic compartments. The implied model shows a remarkable correspondence to experimentally observed cortical connectivity patterns, plasticity, and dynamics, and at the same time can explain hallmarks of predictive processing in cortex, such as mismatch responses. We thus propose dendritic predictive coding as one of the main organizational principles of cortex.
How are visuomotor mismatch responses in primary visual cortex embedded into cortical processing? Here we show that mismatch responses can be understood as the result of a cooperation between motor and visual areas to jointly explain optic flow. This cooperation requires that optic flow is not explained redundantly by both areas, meaning that optic flow inputs to V1 that are predictable from motor neurons should be canceled (i.e., explained away). As a result, neurons in V1 represent only external causes of optic flow, which could allow the animal to easily detect movements that are independent of its own locomotion. We implement the proposed model in a spiking neural network, where coding errors are computed in dendrites and synaptic weights are learned with voltage-dependent plasticity rules. We find that both positive and negative mismatch responses arise, providing an alternative to the prevailing idea that visuomotor mismatch responses are linked to dedicated neurons for error computation. These results also provide a new perspective on several other recent observations of cross-modal neural interactions in cortex.
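The explaining-away logic described above can be sketched in one dimension. This is a hypothetical reduction, not the paper's spiking network: `flow`, `motor`, and the prediction gain `w_pred` are assumed quantities, and the three scenarios (closed-loop running, flow halt, passive playback) are standard visuomotor-mismatch conditions used here for illustration.

```python
def dendritic_error(flow, motor, w_pred):
    """Residual dendritic input once the motor-predicted optic flow is
    explained away; only externally caused flow survives the subtraction."""
    return flow - w_pred * motor

w_pred = 1.0  # learned gain of the motor-to-V1 prediction (assumed converged)
motor = 1.0   # running speed, arbitrary units

# Closed loop: flow matches self-motion, prediction cancels it, no residual.
closed_loop = dendritic_error(flow=1.0, motor=motor, w_pred=w_pred)
# Flow halts while the animal keeps running: negative residual.
halt = dendritic_error(flow=0.0, motor=motor, w_pred=w_pred)
# Playback: flow without locomotion: positive residual.
playback = dendritic_error(flow=1.0, motor=0.0, w_pred=w_pred)
# w_pred itself could be learned with a voltage-dependent rule, schematically
# w_pred += eta * dendritic_error(flow, motor, w_pred) * motor, so that
# self-generated flow is progressively explained away during development.
```

Residuals of both signs arise from the same subtraction in a single dendritic compartment, which is the sense in which positive and negative mismatch responses need no dedicated error neurons.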