2021 IEEE Winter Conference on Applications of Computer Vision (WACV) 2021
DOI: 10.1109/wacv48630.2021.00159
FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation

Cited by 336 publications (199 citation statements)
References 44 publications
“…A particular example is the recent Open AI CLIP [60] model which is a large scale model pre-trained on wide variety of images with language supervision. In its broader impact section, the authors present fairness evaluations of their model on harmful label associations and disparity in gender recognition using FairFace [39] dataset. However, these evaluations did not provide systematic protocols that can be followed for any pretrained model for assessing fairness such as geodiversity.…”
Section: Related Work (mentioning); confidence: 99%
“…Similarly, datasets for training or evaluating face recognition algorithms may include annotations for fairness analysis [11,26]. In other cases, these annotations were initially curated as training data for attribute classifiers (e.g., Celeb-A [18], FairFace [14], UTKFace [32]) but may also be useful for studying bias [1,23].…”
Section: Identifying and Reducing Bias in Trained Models (mentioning); confidence: 99%
“…We first study FairFace [13], a collection of 100,000 face images annotated with crowd-sourced labels about the perceived age, race, and gender of each image. FairFace is notable for being approximately balanced across 7 races and 2 genders.…”
Section: FairFace (mentioning); confidence: 99%