Humans and other animals demonstrate a remarkable ability to generalize knowledge across distinct contexts and objects during natural behavior. We posit that this ability depends on the geometry of the neural population representations of these objects and contexts. Specifically, abstract, or disentangled, neural representations -- in which neural population activity is a linear function of the variables important for making a decision -- are known to allow for this kind of generalization. Further, recent neurophysiological studies have shown that the brain has sufficiently abstract representations of some sensory and cognitive variables to enable generalization across distinct contexts. However, it is unknown how these abstract representations emerge. Here, using feedforward neural networks, we demonstrate a simple mechanism by which these abstract representations can be produced: the learning of multiple distinct classification tasks. We find that, despite heterogeneity in the task structure, abstract representations that enable reliable generalization can be produced from a variety of inputs -- including standard nonlinearly mixed inputs, inputs that mimic putative representations from early sensory areas, and even simple image inputs from a standard machine learning data set. Thus, we conclude that abstract representations of sensory and cognitive variables emerge from the multiple behaviors that animals exhibit in the natural world, and may be pervasive in high-level brain regions. We make several specific predictions about which variables will be represented abstractly, and we show how these representations can be detected.
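To make the proposed mechanism concrete, the following is a minimal sketch (not the authors' code) of the training-and-detection pipeline the abstract describes: a feedforward network is trained on several random binary classification tasks defined over latent variables, and abstraction in its hidden layer is then probed by training a linear decoder for one latent variable in one context and testing it in the held-out context. All sizes, the data-generating process, and variable names here are illustrative assumptions, and PyTorch is assumed as the framework.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
torch.manual_seed(0)

n_latent, n_input, n_hidden, n_tasks, n_samples = 2, 50, 100, 10, 5000

# Two binary latent variables in {-1, +1} (e.g., "context" and "identity").
Z = rng.choice([-1.0, 1.0], size=(n_samples, n_latent))

# Nonlinearly mixed input, standing in for an upstream sensory representation.
mix = rng.normal(size=(n_latent, n_input))
X = np.tanh(Z @ mix + 0.1 * rng.normal(size=(n_samples, n_input)))

# Each task is a random linear categorization of the latent variables.
task_vecs = rng.normal(size=(n_tasks, n_latent))
Y = (Z @ task_vecs.T > 0).astype(np.float32)

# Feedforward network with one shared hidden layer and one output per task.
net = nn.Sequential(nn.Linear(n_input, n_hidden), nn.ReLU(),
                    nn.Linear(n_hidden, n_tasks))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
Xt, Yt = torch.tensor(X, dtype=torch.float32), torch.tensor(Y)
for _ in range(2000):
    opt.zero_grad()
    loss_fn(net(Xt), Yt).backward()
    opt.step()

# Hidden-layer activity after training on all tasks simultaneously.
H = net[1](net[0](Xt)).detach().numpy()

# Cross-condition generalization test: fit a linear decoder for latent 0
# using only samples where latent 1 = -1, then test where latent 1 = +1.
# High held-out accuracy indicates an abstract representation of latent 0.
train_idx, test_idx = Z[:, 1] < 0, Z[:, 1] > 0
w, *_ = np.linalg.lstsq(H[train_idx], Z[train_idx, 0], rcond=None)
acc = np.mean(np.sign(H[test_idx] @ w) == Z[test_idx, 0])
print(f"cross-condition generalization accuracy: {acc:.2f}")
```

The final step is one common way to operationalize "generalization across distinct contexts": because the decoder never sees the test context during fitting, it can only succeed if the latent variable is represented in a (near-)linear, context-invariant way.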