2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla51294.2020.00167
Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups

Cited by 43 publications (17 citation statements)
References 13 publications
“…By building models on data from a variety of different sites with increased awareness of the specific populations included, we may begin to mitigate the potential biases in our results. Ultimately, improvements in DL algorithms remain relatively nascent, and there has been an increased focus on classification performance across gender and race, providing us with impetus to ensure that DL algorithms can be successfully used to mitigate health care disparities based on demographics (35–37).…”
Section: Data: What Are We Using and For What Purpose?
Confidence: 99%
“…For example, Labeled Faces in the Wild is 83.5% white, and the IJB-A dataset, which was specifically created to emphasize geographic diversity, draws only 21.4% of its examples from faces with darker skin tones (Buolamwini & Gebru, 2018). Researchers (Krishnan et al, 2020) compared models trained on the UTKFace dataset with models trained on the FairFace dataset (which is designed to be balanced with respect to race), and found that across 3 model architectures, model results were substantially less biased after training on FairFace (Krishnan et al, 2020). These findings underscore the need for better public, large-scale datasets labeled with demographic data in order to enable further empirical study of algorithmic bias.…”
Section: Huge Datasets and Algorithmic Fairness
Confidence: 99%
“…Further complicating the problem, even an ideal training dataset devoid of any biases does not guarantee that the trained ML model will be bias-free [348]: model underspecification [137], model misspecification [349], and biological differences that render the prediction task easier or harder in different groups [350] may still introduce arbitrary biases. Similarly, Hooker [348] discusses the impact of seemingly innocent technical measures and design choices on the fairness of the resulting model.…”
Section: Obstacles To Achieving Fairness
Confidence: 99%