Theoretical neuroscientists and machine learning researchers have proposed a variety of learning rules for linear neuron models to enable artificial neural networks to accomplish supervised and unsupervised learning tasks. It has not been clear, however, how these theoretically derived rules relate to biological mechanisms of plasticity in the brain, or how the brain might mechanistically implement different learning rules in different contexts and brain regions. Here, we show that the calcium control hypothesis, which relates plastic synaptic changes in the brain to the calcium concentration [Ca2+] in dendritic spines, can reproduce a wide variety of learning rules. We propose a simple, perceptron-like neuron model with four sources of [Ca2+]: local (following the activation of an excitatory synapse and confined to that synapse), heterosynaptic (due to the activity of adjacent synapses), postsynaptic spike-dependent, and supervisor-dependent. By specifying the plasticity thresholds and the amount of calcium contributed by each source, it is possible to implement Hebbian and anti-Hebbian rules, one-shot learning, and perceptron learning, as well as a variety of novel learning rules.
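The model described above can be sketched in a few lines of NumPy. All numeric values below (the thresholds `THETA_D`/`THETA_P`, the per-source calcium amplitudes, and the step sizes) are illustrative assumptions chosen to produce a Hebbian rule, not parameters taken from the paper; the function names are likewise hypothetical.

```python
import numpy as np

# Assumed thresholds of the calcium control hypothesis: [Ca2+] between
# THETA_D and THETA_P drives depression; above THETA_P, potentiation.
THETA_D, THETA_P = 1.0, 2.0
ETA_D, ETA_P = 0.1, 0.1  # weight-change step sizes (illustrative)

def spine_calcium(x, post_spike, supervisor,
                  c_local=1.2, c_hetero=0.0, c_spike=0.9, c_sup=0.0):
    """Total [Ca2+] per synapse, summed over the four hypothesised sources.

    x          -- binary vector of presynaptic activity (1 = synapse active)
    post_spike -- 1 if the postsynaptic neuron spiked, else 0
    supervisor -- 1 if the supervisory signal is present, else 0
    """
    x = np.asarray(x, dtype=float)
    local = c_local * x                             # confined to active synapses
    hetero = c_hetero * (1.0 - x) * (x.sum() > 0)   # spillover onto inactive neighbours
    return local + hetero + c_spike * post_spike + c_sup * supervisor

def weight_update(ca):
    """Calcium-control rule: no change below THETA_D, depression on
    [THETA_D, THETA_P), potentiation at or above THETA_P."""
    dw = np.zeros_like(ca)
    dw[(ca >= THETA_D) & (ca < THETA_P)] = -ETA_D
    dw[ca >= THETA_P] = +ETA_P
    return dw

# With these particular amplitudes the rule is Hebbian: an active synapse is
# potentiated only when the postsynaptic neuron also spikes, depressed when it
# is active alone; inactive synapses stay below THETA_D and are unchanged.
x = [1, 0]
print(weight_update(spine_calcium(x, post_spike=1, supervisor=0)))  # potentiate active synapse
print(weight_update(spine_calcium(x, post_spike=0, supervisor=0)))  # depress active synapse
```

Other rules follow from the same skeleton by re-weighting the sources: a large `c_sup` with a high potentiation threshold gates plasticity on the supervisor (supervised, perceptron-like learning), while swapping the sign conventions of the two threshold bands yields anti-Hebbian behaviour.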