The ability to make inferences using abstract rules and relations has long been understood to be a hallmark of human intelligence, as evidenced in logic, mathematics, and language. Intriguingly, modern work in animal cognition has established that this ability is evolutionarily widespread, indicating an ancient and possibly foundational role in natural intelligence. Despite this importance, it remains an open question how inference using abstract rules is implemented in the brain, possibly owing to a lack of competing hypotheses at the level of both collective neural activity and behavior. Here we report the generation and analysis of a collection of neural networks (NNs) that perform transitive inference (TI), a classical cognitive task that requires inference of a single abstract relation between novel combinations of inputs (if A > B and B > C, then A > C). We found that NNs generated using standard training methods (i) generalize fully (i.e., to all novel combinations of inputs), (ii) generalize when inference requires working memory (WM), a capacity thought to be essential for inference in living subjects, (iii) express multiple emergent behaviors long documented in humans and animals, in addition to novel behaviors not previously studied, and (iv) adopt different solutions that yield alternative predictions for both behavior and collective neural activity. Further, a subset of NNs expressed a "subtractive" solution that was characterized in neural activity space by a simple dynamical pattern (an oscillation) and geometric arrangement (ordered collinearity). Together, these findings show how collective neural activity can accomplish generalization according to an abstract rule, and provide a series of testable hypotheses not previously established in the study of TI. More broadly, these findings suggest new ways to understand how neural systems realize abstract rules and relations.
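As an informal illustration of the "subtractive" solution named above, the sketch below (not the paper's actual models; all names and hyperparameters here are hypothetical) learns one scalar value per item from adjacent premise pairs only, and chooses between any two items by the sign of their value difference. Such a scheme generalizes to all novel non-adjacent pairs, mirroring the full generalization described in the abstract.

```python
import itertools
import math
import random

# Hypothetical sketch of a "subtractive" solution to transitive inference (TI):
# each item carries a learned scalar value v[item], and the choice between a
# pair (a, b) is determined by the sign of v[a] - v[b]. Training sees only
# adjacent premise pairs (A > B, B > C, ...), never the non-adjacent pairs.

items = list("ABCDEFG")  # true order A > B > ... > G, by construction
premise_pairs = [(items[i], items[i + 1]) for i in range(len(items) - 1)]

v = {it: 0.0 for it in items}  # learned scalar value per item
lr = 0.1                       # learning rate (illustrative choice)
random.seed(0)

for _ in range(2000):
    a, b = random.choice(premise_pairs)        # a is the "greater" item
    p = 1.0 / (1.0 + math.exp(-(v[a] - v[b])))  # P(choose a), logistic model
    # Gradient step on the log-likelihood of the correct choice:
    # push the winner's value up and the loser's value down.
    v[a] += lr * (1.0 - p)
    v[b] -= lr * (1.0 - p)

# Test on ALL ordered pairs, including novel non-adjacent combinations.
generalizes = all(v[a] > v[b] for a, b in itertools.combinations(items, 2))
print(generalizes)
```

The learned values end up ordered along a single axis, so every pairwise comparison reduces to one subtraction; this is one behavioral-level analogue of the ordered, collinear geometry the abstract attributes to the subtractive NNs.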