Abstract. This paper evaluates the discriminant capabilities of face parts for gender recognition. Given a face image, a number of subimages containing the eyes, nose, mouth, chin, right eye, internal face (eyes, nose, mouth and chin), external face (hair, ears and contour), and the full face are extracted and represented as appearance-based data vectors. Compared with previous related works, a greater number of face parts and two face databases (instead of only one) were considered, along with several classification rules. Experiments showed that single face parts provide enough information to discriminate between genders, with recognition rates reaching 86%, while classifiers based on the joint contribution of internal parts achieved rates above 90%. The best result using the full face was similar to those reported in the general gender recognition literature (>95%). A high degree of correlation was found among classifiers with regard to their capacity to measure the relevance of face parts, but results were strongly dependent on the composition of the database. Finally, an evaluation of the complementarity between the discriminant information of pairs of face parts revealed a high potential for defining effective combinations of classifiers.
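To make the appearance-based representation concrete, the following is a minimal sketch (not the authors' code) of how a face-part subimage might be cropped, resized and flattened into a data vector, and then classified with a simple nearest-neighbour rule; the bounding-box format, the 32x32 patch size and the 1-NN classifier are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def part_vector(gray_face, box, size=(32, 32)):
    """Crop a face part and flatten it into an appearance-based vector.

    gray_face : 2-D array holding the aligned grayscale face.
    box       : (top, left, height, width) of the part; assumed known here.
    size      : common resolution so all part vectors have the same length.
    """
    t, l, h, w = box
    patch = gray_face[t:t + h, l:l + w].astype(np.float64)
    # Nearest-neighbour resampling to a fixed size (keeps the sketch
    # dependency-free; any interpolation method would serve equally well).
    rows = np.linspace(0, h - 1, size[0]).astype(int)
    cols = np.linspace(0, w - 1, size[1]).astype(int)
    patch = patch[np.ix_(rows, cols)]
    v = patch.ravel()
    return (v - v.mean()) / (v.std() + 1e-8)  # simple photometric normalisation

def nn_gender(train_vectors, train_labels, test_vector):
    """1-NN rule: return the gender label of the closest training vector."""
    dists = np.linalg.norm(train_vectors - test_vector, axis=1)
    return train_labels[np.argmin(dists)]
```

In such a scheme, one classifier would be trained per face part, and combinations of parts (e.g. the internal face) would be handled either by concatenating the corresponding vectors or by fusing the decisions of the individual classifiers.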