Viewers are highly accurate at recognising sex and race from faces, though it remains unclear how this is achieved. Recognition of familiar faces is also highly accurate across a very large range of viewing conditions, despite the difficulty of the problem. Here we show that computation of sex and race can emerge incidentally from a system designed to compute identity. We emphasise the role of multiple encounters with a small number of people, which we take to underlie human face learning. We use highly variable everyday 'ambient' images of a few people to train a Linear Discriminant Analysis (LDA) model on identity. The resulting model has human-like properties, including a facility to cohere previously unseen ambient images of familiar (trained) people, an ability which breaks down for the faces of unknown (untrained) people. The first dimension created by the identity-trained LDA classifies both familiar and unfamiliar faces by sex, and the second dimension classifies faces by race, even though neither of these categories was explicitly coded at learning. By varying the numbers and types of face identities on which a further series of LDA models were trained, we show that this incidental learning of sex and race reflects covariation between these social categories and face identity, and that a remarkably small number of identities need be learnt before such incidental dimensions emerge. The task of learning to recognise familiar faces is sufficient to create certain salient social categories.
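To illustrate the general approach (not the authors' pipeline), the sketch below trains an LDA on identity labels only and then asks whether its first discriminant dimension incidentally orders faces by sex. It assumes scikit-learn, vectorised face images reduced by PCA, and placeholder random data standing in for the ambient-image features and labels.

```python
# Minimal sketch, assuming scikit-learn; arrays and labels are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Placeholder data: 200 "ambient" images of 10 identities, 1,000 features each.
X = rng.normal(size=(200, 1000))          # vectorised face images
identity = rng.integers(0, 10, size=200)  # identity labels used for training
sex = rng.integers(0, 2, size=200)        # social-category labels, NOT used in training

# Reduce dimensionality first (features outnumber images), then learn
# discriminant dimensions that separate the trained identities.
pca = PCA(n_components=50).fit(X)
lda = LinearDiscriminantAnalysis().fit(pca.transform(X), identity)

# Project faces (trained or novel) into the identity-trained space and test
# whether the first LDA dimension covaries with sex despite never seeing it.
Z = lda.transform(pca.transform(X))
corr = np.corrcoef(Z[:, 0], sex)[0, 1]
print(f"Correlation of LDA dimension 1 with sex: {corr:.2f}")
```

With real ambient images, the same projection step can be applied to untrained (unfamiliar) identities to check whether the incidental sex and race dimensions generalise beyond the training set, as the abstract describes.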