2020 IEEE Winter Applications of Computer Vision Workshops (WACVW)
DOI: 10.1109/wacvw50321.2020.9096947
Analysis of Gender Inequality In Face Recognition Accuracy

Abstract: We present a comprehensive analysis of how and why face recognition accuracy differs between men and women. We show that accuracy is lower for women due to the combination of (1) the impostor distribution for women having a skew toward higher similarity scores, and (2) the genuine distribution for women having a skew toward lower similarity scores. We show that this phenomenon of the impostor and genuine distributions for women shifting closer towards each other is general across datasets of African-American, …
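The mechanism the abstract describes can be illustrated with a minimal simulation (the distribution parameters below are hypothetical, not from the paper): when a group's impostor scores shift higher and its genuine scores shift lower, the two distributions overlap more, and separability — and hence verification accuracy — drops.

```python
# Hypothetical sketch of the effect described in the abstract: greater
# overlap between genuine and impostor similarity-score distributions
# lowers verification accuracy. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def d_prime(genuine, impostor):
    """d-prime separability of genuine vs. impostor similarity scores."""
    return abs(genuine.mean() - impostor.mean()) / np.sqrt(
        0.5 * (genuine.var() + impostor.var()))

# Group A: well-separated score distributions.
gen_a = rng.normal(0.70, 0.10, 10_000)
imp_a = rng.normal(0.30, 0.10, 10_000)

# Group B: genuine scores skew lower, impostor scores skew higher,
# mirroring the shift the paper reports for women.
gen_b = rng.normal(0.65, 0.10, 10_000)
imp_b = rng.normal(0.35, 0.10, 10_000)

print(d_prime(gen_a, imp_a))  # larger: distributions easier to separate
print(d_prime(gen_b, imp_b))  # smaller: more overlap, lower accuracy
```

Any fixed decision threshold applied to both groups will then produce a higher false match rate or false non-match rate for the group with the smaller d-prime.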

Cited by 72 publications (57 citation statements)
References 26 publications
“…Computer vision datasets are often found to be biased [64,76]. Human face datasets are particularly scrutinized [2,20,43,45,46,54] because methods and models trained on these data can end up being biased along attributes that are protected by the law [44]. Approaches to mitigating dataset bias include collecting more thorough examples [54], using image synthesis to compensate for distribution gaps [46], and example resampling [48].…”
Section: Related Work
confidence: 99%
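One mitigation named in the excerpt above is example resampling. A minimal sketch (an assumption on my part, not the method of any cited work) is to oversample underrepresented groups, with replacement, until every group contributes equally to training:

```python
# Hypothetical sketch of example resampling for dataset bias mitigation:
# oversample minority groups (with replacement) up to the majority size.
import random

def balance_by_group(examples, key=lambda ex: ex["group"]):
    """Return a list in which every group appears equally often."""
    groups = {}
    for ex in examples:
        groups.setdefault(key(ex), []).append(ex)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw extra copies at random to reach the majority-group size.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Toy imbalanced dataset: 8 examples of group A, 2 of group B.
data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
balanced = balance_by_group(data)
```

Oversampling duplicates examples rather than adding new information, which is why the excerpt also mentions complementary strategies such as broader data collection and image synthesis.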
“…Recent work uses generative models to explore face classification system biases. One study explores how variations in pose and lighting affect classifier performance [2,45,46]. A second study uses a generative model to synthesize faces along particular attribute directions [19].…”
Section: Related Work
confidence: 99%
“…Several studies have concluded that face recognition is more accurate in men than in women (Buolamwini and Gebru, 2018; Albiero et al., 2020). Accordingly, our CNN committed errors in 2 male and 3 female cases.…”
Section: Discussion
confidence: 64%
“…Face Gender Recognition (FGR) is a major area of non-verbal communication in day-to-day life. FGR systems have attracted numerous researchers because they attempt to overcome the problems and factors that weaken these systems, including image classification, and because of their large-scale applications in face analysis, particularly face recognition [1]. Gender-based separation among humans is classified into two classes: male and female [2].…”
Section: Introduction
confidence: 99%
“…It is based on two-dimensional images of human subjects. Gender classification and recognition from facial imagery has grown in importance in the computer vision field: it plays an important role in many areas, such as face recognition [1][3], forensic crime detection [4][5], facial emotion recognition [3], psychologically affected patients [6], night surveillance [7], Artificial Intelligence [8][9], and so on. In this paper it can be used to quickly identify a criminal from his sketch for purposes of identification [10].…”
Section: Introduction
confidence: 99%