2002
DOI: 10.1109/34.1000244

Learning gender with support faces

Abstract: Nonlinear Support Vector Machines (SVMs) are investigated for appearance-based gender classification with low-resolution "thumbnail" faces processed from 1,755 images from the FERET face database. The performance of SVMs (3.4 percent error) is shown to be superior to traditional pattern classifiers (linear, quadratic, Fisher linear discriminant, nearest-neighbor) as well as more modern techniques such as Radial Basis Function (RBF) classifiers and large ensemble-RBF networks. Furthermore, the differen…
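As a rough illustration of the setup the abstract describes, the sketch below trains a nonlinear (RBF-kernel) SVM on flattened low-resolution face thumbnails. It is a minimal sketch assuming random placeholder data; the thumbnail size, train/test split, and hyperparameters are chosen for illustration rather than taken from the paper's protocol.

```python
# Minimal sketch: RBF-kernel SVM on flattened "thumbnail" faces.
# The arrays below are random placeholders; image size, split and
# hyperparameters are illustrative assumptions, not the paper's settings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
faces = rng.random((1755, 21, 12))        # stand-in for 1,755 thumbnail faces
labels = rng.integers(0, 2, size=1755)    # stand-in gender labels (0/1)

X = faces.reshape(len(faces), -1)         # flatten each thumbnail to a vector
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0, stratify=labels)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # nonlinear (Gaussian-kernel) SVM
clf.fit(X_train, y_train)
print("held-out error rate: %.3f" % (1.0 - clf.score(X_test, y_test)))
```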

Cited by 532 publications (360 citation statements)
References 22 publications
“…The state-of-the-art recognition rate for the Color FERET database (Phillips et al, 2000) involving frontal faces with frontal illumination and 5-fold cross-validation is around 93% using either a Support Vector Machine with Radial Basis Function kernel (Moghaddam and Yang, 2002), pair-wise comparison of pixel values within a boosting framework (Baluja and Rowley, 2007) or linear discriminant techniques (Bekios-Calfa et al, 2011). This performance drops significantly if classifiers are trained and tested on different databases.…”
Section: Introduction
confidence: 99%
“…Gender is perhaps the most widely studied facial demographic attribute in the Computer Vision field (Moghaddam and Yang, 2002; Baluja and Rowley, 2007; Mäkinen and Raisamo, 2008; Bekios-Calfa et al, 2011). The state-of-the-art recognition rate for the Color FERET database (Phillips et al, 2000) involving frontal faces with frontal illumination and 5-fold cross-validation is around 93% using either a Support Vector Machine with Radial Basis Function kernel (Moghaddam and Yang, 2002), pair-wise comparison of pixel values within a boosting framework (Baluja and Rowley, 2007) or linear discriminant techniques (Bekios-Calfa et al, 2011).…”
Section: Introduction
confidence: 99%
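The roughly 93% figure quoted above refers to a 5-fold cross-validation protocol on Color FERET. Below is a minimal sketch of such an evaluation loop with an RBF-kernel SVM, assuming placeholder feature vectors in place of aligned, preprocessed face images.

```python
# Hedged sketch of 5-fold cross-validation with an RBF-kernel SVM.
# X and y are random placeholders standing in for aligned, flattened
# face images and their gender labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((1755, 252))               # placeholder feature vectors
y = rng.integers(0, 2, size=1755)         # placeholder gender labels

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5) # accuracy on each of the 5 folds
print("mean accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```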
“…Experiments with this approach were conducted on the FERET database, achieving accuracies of 96% on the gender classification task and 94% on the ethnic classification task. C. F. Lin [10] presented an approach based on a fuzzy support vector machine (FSVM) with good generalization ability. The fuzzy membership function assigns to each input face feature vector the degree to which the face belongs to the male or female class; the aim of the fuzzification in FSVM is that different samples make different contributions to the learning of the decision surface.…”
Section: Gender Classification
confidence: 99%
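The fuzzy-SVM idea summarized above amounts to letting each training sample contribute to the decision surface in proportion to a membership degree. A minimal sketch follows, using per-sample weights in a standard SVM and a hypothetical distance-to-class-mean membership function; this is an approximation for illustration, not the formulation of [10].

```python
# Sketch of the fuzzy-SVM idea: each training face gets a membership weight
# that scales its contribution to the decision surface. The membership
# function below (distance to the class mean) is a hypothetical stand-in.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 252))                # placeholder face feature vectors
y = rng.integers(0, 2, size=200)          # placeholder gender labels (0/1)

weights = np.empty(len(X))
for c in (0, 1):
    idx = np.where(y == c)[0]
    d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
    weights[idx] = 1.0 - 0.9 * (d / (d.max() + 1e-12))   # outliers get lower weight

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, y, sample_weight=weights)      # weighted samples approximate fuzzy memberships
```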
“…A recent approach based on perfectly aligned images outperforms humans on low-resolution images [19]. In [18] a Gabor wavelet representation on selected points is used with good results in gender and race classification.…”
Section: Facial Description
confidence: 99%
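The Gabor-wavelet representation mentioned for [18] can be illustrated by concatenating filter-bank responses sampled at a few facial points. The sketch below assumes a placeholder image, hypothetical landmark positions, and arbitrary filter parameters, none of which are taken from [18].

```python
# Illustrative Gabor-wavelet features at selected facial points. The image,
# landmark positions and filter parameters are placeholders.
import numpy as np
import cv2

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8).astype(np.float32)  # stand-in face
points = [(40, 48), (40, 80), (70, 64), (95, 64)]   # hypothetical landmarks (row, col)

features = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):   # 4 orientations
    for lambd in (4.0, 8.0):                              # 2 wavelengths
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, lambd, 0.5, 0)
        response = cv2.filter2D(img, -1, kernel)
        features.extend(abs(response[r, c]) for r, c in points)

print(len(features), "Gabor features")  # 4 orientations x 2 wavelengths x 4 points
```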
“…However, other facial descriptors which are particularly useful to describe unknown individuals, or to notice changes in human appearance during social interaction, have not attracted similar interest from researchers. Certainly, gender classification and facial expression recognition are exceptions [19,21]; but other descriptors such as race, glasses, moustaches, beards, hair color, hair style, eye color, etc., have not been widely considered.…”
Section: Introduction
confidence: 99%