Maximization of information transmission by a spiking-neuron model predicts changes of synaptic connections that depend on the timing of pre- and postsynaptic spikes and on the postsynaptic membrane potential. Under the assumption of Poisson firing statistics, the synaptic update rule exhibits all of the features of the Bienenstock-Cooper-Munro rule, in particular, regimes of synaptic potentiation and depression separated by a sliding threshold. Moreover, the learning rule is also applicable to the more realistic case of neuron models with refractoriness, and it is sensitive to correlations between input spikes even in the absence of presynaptic rate modulation. The learning rule is found by maximizing the mutual information between presynaptic and postsynaptic spike trains under the constraint that the postsynaptic firing rate stays close to some target firing rate. An interpretation of the synaptic update rule in terms of homeostatic synaptic processes and spike-timing-dependent plasticity is discussed.

computational neuroscience | information theory | learning | spiking-neuron model | synaptic plasticity

The efficacy of synaptic connections between neurons in the brain is not fixed but varies depending on the firing frequency of presynaptic neurons (1, 2), the membrane potential of the postsynaptic neuron (3), spike timing (4-6), and intracellular parameters such as the calcium concentration; for a review, see ref. 7. Over the last decades, a large number of theoretical concepts and mathematical models have emerged that have helped to clarify the functional consequences of synaptic modifications, in particular long-term potentiation (LTP) and long-term depression (LTD), during development, learning, and memory; for reviews, see refs. 8-10. Apart from the work of Hebb (11), one of the most influential theoretical concepts has been the Bienenstock-Cooper-Munro (BCM) model, originally developed to account for cortical organization and receptive-field properties during development (12). The model predicted (i) regimes of both LTD and LTP, depending on the state of the postsynaptic neuron, and (ii) a sliding threshold that separates the two regimes. Both predictions (i) and (ii) have subsequently been confirmed experimentally (2, 13, 14).

In this paper, we construct a bridge between the BCM model and a seemingly unconnected line of research in theoretical neuroscience centered on the concept of optimality. There are indications that several components of neural systems show close-to-optimal performance (15-17). Instead of looking at a specific implementation of synaptic changes, defined by a rule such as that of the BCM model, we therefore ask: what would be the optimal synaptic update rule to guarantee that a spiking neuron transmits as much information as possible? Information-theoretic concepts have been used by several researchers because they allow the performance of neural systems to be compared with a fundamental theoretical limit (16, 17), but optimal synaptic update rules have so far been mostly restricted to a pure r...
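To make the two ingredients discussed above concrete, the following is a minimal sketch, not the authors' exact formulation: the symbols $\gamma$, $\tilde{P}$, $\eta$, $\phi$, and $\theta$ are introduced here purely for illustration. The first display shows an information-maximization objective with a homeostatic firing-rate constraint; the second shows a BCM-style update with a sliding threshold.

% Sketch only: objective combining mutual information with a rate constraint.
% X, Y: pre- and postsynaptic spike trains; \tilde{P}(Y): output distribution of a neuron
% firing at the target rate; gamma > 0 weights the constraint (illustrative notation).
\[
  L \;=\; I(X;Y) \;-\; \gamma\, D_{\mathrm{KL}}\!\bigl(P(Y)\,\big\|\,\tilde{P}(Y)\bigr),
  \qquad
  \Delta w_j \;\propto\; \frac{\partial L}{\partial w_j}.
\]
% BCM-style rule for comparison: x_j presynaptic activity, y postsynaptic activity,
% phi negative below and positive above the sliding threshold theta (e.g. theta ~ <y^2>).
\[
  \frac{dw_j}{dt} \;=\; \eta\, x_j\, \phi(y;\theta),
  \qquad
  \theta \;\propto\; \langle y^{2} \rangle .
\]

Gradient ascent on an objective of the first form changes sign with the postsynaptic state, which is the sense in which such a rule can reproduce the BCM regimes of depression and potentiation and the sliding threshold described in the abstract.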