The perception of face gender was examined in the context of extending "face space" models of human face representations to include the perceptual categories defined by male and female faces. We collected data on the recognizability, gender classifiability (reaction time to classify a face as male or female), attractiveness, and masculinity/femininity of individual male and female faces. Factor analyses applied separately to the data for male and female faces yielded the following results. First, for both male and female faces, the recognizability and gender classifiability of faces were independent, a result inconsistent with the hypothesis that both recognizability and gender classifiability depend on a face's "distance" from the subcategory gender prototype. Instead, caricatured aspects of gender (femininity/masculinity ratings) related to the gender classifiability of the faces. Second, facial attractiveness related inversely to face recognizability for male, but not for female, faces, a result that resolves inconsistencies in previous studies. Third, attractiveness and femininity for female faces were nearly equivalent, but attractiveness and masculinity for male faces were not. Finally, we applied principal component analysis to the pixel-coded face images with the aim of extracting measures related to the gender classifiability and recognizability of individual faces. We incorporated these model-derived measures into the factor analysis with the human rating and performance measures. This combined analysis indicated that face recognizability is related to the distinctiveness of a face with respect to its gender subcategory prototype. Additionally, the gender classifiability of faces related to at least one caricatured aspect of face gender.

Human faces provide us with a plethora of information that is valuable and necessary for social interaction. When we encounter a face, we can quickly and efficiently decide whether it is one we know. For faces of persons we know, we can often retrieve semantic and identity information about the person. Additionally, from both familiar and unfamiliar faces we can make judgments about the gender, approximate age, and race of the person. The information we use to accomplish these latter judgments has been referred to by Bruce and Young (1986) in their model of face processing as "visually derived semantic" information.
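As a rough illustration of the kind of model-derived measure described above, the sketch below applies principal component analysis to flattened, pixel-coded face images and computes each face's distance from its gender subcategory prototype (taken here as the mean of its gender group) in the component space. This is a minimal sketch under stated assumptions, not the authors' actual procedure: the array names, the number of components, and the use of scikit-learn's PCA are illustrative choices, not details from the paper.

```python
# A minimal sketch (not the authors' exact procedure) of deriving a
# "distinctiveness" measure from pixel-coded face images with PCA.
# Assumptions: images are grayscale, pre-aligned, and stacked in a
# NumPy array `faces` of shape (n_faces, height, width); `is_male`
# is a boolean array marking each face's gender subcategory.
import numpy as np
from sklearn.decomposition import PCA

def gender_distinctiveness(faces, is_male, n_components=20):
    # Flatten each image into a pixel vector.
    X = faces.reshape(len(faces), -1).astype(float)

    # Project all faces into a low-dimensional PCA space.
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(X)

    # Prototype of each gender subcategory = mean location of that
    # subcategory's faces in the component space.
    male_proto = coords[is_male].mean(axis=0)
    female_proto = coords[~is_male].mean(axis=0)

    # Distinctiveness = Euclidean distance of each face from its own
    # subcategory prototype (larger = more distinctive).
    protos = np.where(is_male[:, None], male_proto, female_proto)
    return np.linalg.norm(coords - protos, axis=1)
```

A per-face distance of this kind could then be entered into a factor analysis alongside the human recognizability and gender-classification measures, in the spirit of the combined analysis summarized above.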