We present a new mathematical formulation of associative learning focused on non-human animals, which we call A-learning. Building on current animal learning theory and machine learning, A-learning is composed of two learning equations, one for stimulus-response values and one for stimulus values (conditioned reinforcement). A third equation implements decision-making by mapping stimulus-response values to response probabilities. We show that A-learning can reproduce the main features of: instrumental acquisition, including the effects of signaled and unsignaled non-contingent reinforcement; Pavlovian acquisition, including higher-order conditioning, omission training, autoshaping, and differences in form between conditioned and unconditioned responses; acquisition of avoidance responses; acquisition and extinction of instrumental chains and Pavlovian higher-order conditioning; Pavlovian-to-instrumental transfer; Pavlovian and instrumental outcome revaluation effects, including insight into why these effects vary greatly with training procedures and with the proximity of a response to the reinforcer. We discuss the differences between current theory and A-learning, such as its lack of stimulus-stimulus and response-stimulus associations, and compare A-learning with other temporal-difference models from machine learning, such as Q-learning, SARSA, and the actor-critic model. We conclude that A-learning may offer a more convenient view of associative learning than current mathematical models, and point out areas that need further development.
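As a rough illustration of the structure just described, the sketch below writes the two learning equations and the decision rule in a generic temporal-difference form of the kind used by Q-learning and SARSA. The symbols are illustrative placeholders rather than the paper's own notation: $v$ denotes a stimulus-response value, $w$ a learned stimulus value (conditioned reinforcement), $u$ primary reinforcement, $\alpha_v$ and $\alpha_w$ learning rates, and $\beta$ an exploration parameter.

\[
\begin{aligned}
% Stimulus-response value update after responding B to stimulus S and observing S'
v(S \to B) &\leftarrow v(S \to B) + \alpha_v \bigl[\, u(S') + w(S') - v(S \to B) \,\bigr], \\
% Stimulus value (conditioned reinforcement) update
w(S) &\leftarrow w(S) + \alpha_w \bigl[\, u(S') + w(S') - w(S) \,\bigr], \\
% Decision-making: softmax mapping from stimulus-response values to response probabilities
\Pr(B \mid S) &= \frac{\exp\bigl(\beta\, v(S \to B)\bigr)}{\sum_{B'} \exp\bigl(\beta\, v(S \to B')\bigr)},
\end{aligned}
\]

where $S$ is the current stimulus, $B$ the response emitted, and $S'$ the stimulus that follows. This is a minimal sketch consistent with the abstract's description, not a statement of the model's exact equations, which are developed in the body of the paper.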