Biological organisms that sequentially experience multiple environments develop self-organized representations of the stimuli unique to each; moreover, these representations are retained long-term, and sometimes utilize overlapping sets of neurons. This functionality is difficult to replicate in silico for several reasons, chief among them the tradeoff between stability, which enables retention, and plasticity, which enables ongoing learning. Here, using a network that leverages an ensemble of neuromimetic mechanisms, I successfully simulate multi-environment learning; measurements of synapse states and stimulus recognition performance taken at multiple time points reveal the following network features as particularly important to its operation. First, while reinforcement-driven stabilization preserves the synapses most important to the representation of each stimulus, pruning eliminates many of the rest, resulting in low-noise representations. Second, in familiar environments, a low baseline rate of exploratory synapse generation balances pruning to confer plasticity without introducing significant noise; in novel environments, by contrast, new synapses are reinforced, reinforcement-driven spine generation promotes further exploration, and learning is hastened. Thus, reinforcement-driven spine generation allows the network to temporally separate its pursuit of the pruning and plasticity objectives. Third, permanent synapses interfere with the learning of new environments, but stimulus competition and long-term depression mitigate this effect; moreover, even when weakened, the permanent synapses enable rapid relearning of the representations to which they correspond. This display of memory suppression and rapid recovery is notable both for its biological analogs and because such a biologically viable strategy for reducing interference would not be favored by artificial objective functions that do not accommodate brief performance lapses. Together, these modeling results advance understanding of intelligent systems by demonstrating the emergence of system-level operations and naturalistic learning outcomes from component-level features, and by showcasing strategies for finessing system design tradeoffs.
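
To make the component-level mechanisms concrete, the sketch below illustrates one way the synapse-level dynamics summarized above (reinforcement-driven stabilization, pruning, baseline exploratory synapse generation, and reinforcement-boosted spine generation with long-term-depression-like decay) could be expressed. All names, parameter values, and update rules here are illustrative assumptions, not the model's actual implementation.

```python
import numpy as np

# Minimal sketch (assumed names/values) of the synapse dynamics described in
# the abstract: reinforcement stabilizes useful synapses, weak synapses are
# pruned, and new synapses appear at a low baseline rate that is boosted by
# reinforcement in novel environments.

rng = np.random.default_rng(0)

N_PRE, N_POST = 50, 20          # assumed network size
PRUNE_THRESHOLD = 0.05          # synapses weaker than this are eliminated
BASELINE_GENESIS_RATE = 0.001   # exploratory synapse generation per step
REINFORCED_GENESIS_RATE = 0.02  # extra generation when reinforcement is high
STABILIZATION_GAIN = 0.1        # reinforcement-driven strengthening
DEPRESSION_RATE = 0.02          # long-term-depression-like weakening

# Sparse initial connectivity with small random weights.
weights = rng.uniform(0.0, 0.2, size=(N_PRE, N_POST)) * (
    rng.random((N_PRE, N_POST)) < 0.1
)

def step(weights, pre_activity, reinforcement):
    """One illustrative update of the synapse population."""
    w = weights.copy()
    existing = w > 0

    # Reinforcement-driven stabilization: co-active, reinforced synapses grow.
    post_activity = pre_activity @ w
    hebbian = np.outer(pre_activity, post_activity)
    w[existing] += STABILIZATION_GAIN * reinforcement * hebbian[existing]

    # Long-term-depression-like decay weakens synapses that are not reinforced.
    w[existing] -= DEPRESSION_RATE * (1.0 - reinforcement) * w[existing]

    # Pruning eliminates synapses that have fallen below threshold.
    w[w < PRUNE_THRESHOLD] = 0.0

    # Exploratory synapse generation: low baseline rate, boosted when
    # reinforcement signals a novel, rewarding environment.
    genesis_rate = BASELINE_GENESIS_RATE + REINFORCED_GENESIS_RATE * reinforcement
    new_sites = (w == 0) & (rng.random(w.shape) < genesis_rate)
    w[new_sites] = PRUNE_THRESHOLD  # nascent synapses start just above pruning

    return w

# Example: a few steps in a "novel" environment (high reinforcement).
for _ in range(5):
    pre = (rng.random(N_PRE) < 0.2).astype(float)  # sparse presynaptic activity
    weights = step(weights, pre, reinforcement=0.8)
print("active synapses:", int((weights > 0).sum()))
```

In this toy form, the reinforcement signal is what temporally separates the pruning and plasticity objectives: when it is low, pruning and the small baseline genesis rate dominate, and when it is high, stabilization and boosted genesis accelerate exploration.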