2020
DOI: 10.3389/frai.2020.00033
Tuning Fairness by Balancing Target Labels

Abstract: The issue of fairness in machine learning models has recently attracted a lot of attention, as ensuring fairness is essential to maintaining the general public's confidence in the deployment of machine learning systems. We focus on mitigating the harm incurred by a biased machine learning system that offers better outputs (e.g., loans, job interviews) for certain groups than for others. We show that bias in the output can naturally be controlled in probabilistic models by introducing a latent target output. This formulation…
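
The abstract's central idea, controlling output bias through a latent target output, can be illustrated with a small, hedged sketch: assume the observed label is a group-dependent noisy observation of a latent "target" label and recover that latent label's posterior by Bayes' rule. The flip probabilities and prior below are hypothetical assumptions for illustration, not the paper's actual model.

```python
# Hypothetical flip model: p(observed y = 1 | latent target y_bar, group s).
# The numbers are illustrative assumptions, not estimates from the paper.
P_OBS_POS = {
    0: {"latent_pos": 0.70, "latent_neg": 0.05},  # disadvantaged group s=0
    1: {"latent_pos": 0.95, "latent_neg": 0.15},  # advantaged group s=1
}

def latent_posterior(y_obs, s, prior_pos=0.5):
    """Posterior p(y_bar = 1 | y_obs, s) under the assumed flip model."""
    p = P_OBS_POS[s]
    like_pos = p["latent_pos"] if y_obs == 1 else 1.0 - p["latent_pos"]
    like_neg = p["latent_neg"] if y_obs == 1 else 1.0 - p["latent_neg"]
    joint_pos = like_pos * prior_pos
    return joint_pos / (joint_pos + like_neg * (1.0 - prior_pos))

# An observed rejection (y=0) in group s=0 still leaves a sizeable chance
# that the latent target label is positive.
print(latent_posterior(y_obs=0, s=0))  # ~0.24
```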


Cited by 10 publications (4 citation statements)
References 27 publications

“…If bias is observed in the AI model as described above, then the researcher should return to the model-development stage and apply model in-processing or post-processing bias-mitigation strategies (Berk et al., 2017; Gorrostieta et al., 2019; Kamishima et al., 2011; Woodworth et al., 2017; Zafar et al., 2017a, 2017b). For example, researchers could transform data, inject or recover noise (Calmon et al., 2017; Zhang et al., 2018), relabel the data to ensure an equal proportion of positive predictions for the sensitive group and its counterparts (Hardt et al., 2016; Luong et al., 2011), reweigh labels before training (Feldman et al., 2015; Kamiran & Calders, 2012; Luong et al., 2011), control target labels via a latent output (Kehrenberg et al., 2020), apply fairness regularization, penalize the mutual information between the sensitive feature and the classifier predictions (Kamishima et al., 2012), or add constraints to the loss functions that require satisfying a proxy for equalized odds or disparate impact (Woodworth et al., 2017; Zafar et al., 2017a, 2017b). See Figure 2 for a general framework for applying bias-mitigation techniques.…”
Section: Bias Mitigation
confidence: 99%
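
One of the pre-processing options listed above, reweighing labels before training (Kamiran & Calders, 2012), can be sketched in a few lines: each instance receives the weight w(s, y) = P(s)P(y)/P(s, y), so that the sensitive attribute and the label appear statistically independent to the learner. The data below are hypothetical.

```python
import numpy as np

def reweighing_weights(s, y):
    """Instance weights w(s, y) = P(s) * P(y) / P(s, y), following the
    reweighing scheme of Kamiran & Calders (2012): over-represented
    (group, label) cells are down-weighted, under-represented ones up-weighted."""
    s, y = np.asarray(s), np.asarray(y)
    w = np.empty(len(y), dtype=float)
    for sv in np.unique(s):
        for yv in np.unique(y):
            cell = (s == sv) & (y == yv)
            if cell.any():
                w[cell] = (s == sv).mean() * (y == yv).mean() / cell.mean()
    return w

# Hypothetical data: the positive label is rare in group s=0.
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])
weights = reweighing_weights(s, y)
# Most scikit-learn classifiers accept these via fit(..., sample_weight=weights).
```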
“…For biased labels, a researcher could relabel the data to ensure an equal proportion of positive predictions for the sensitive group and its counterparts (Hardt et al., 2016; Luong et al., 2011). Other techniques for improving problematic models include reweighing labels before training (Feldman et al., 2015; Kamiran & Calders, 2012; Luong et al., 2011) and controlling target labels via a latent output (Kehrenberg et al., 2020). It is critical that members of sensitive classes provide feedback on assigned labels, especially when those labels are subjective and culturally grounded (e.g., coding tweets for abusive language or labeling audio recordings for episodes of conflict).…”
Section: Assessing and Mitigating Bias in AI
confidence: 99%
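
A hedged sketch of the relabelling idea quoted above, in the spirit of the "massaging" approach of Kamiran & Calders (2012): promote the highest-scoring negatives in the disadvantaged group and demote the lowest-scoring positives in the other group until the positive-label rates match. The ranking scores and data are hypothetical placeholders.

```python
import numpy as np

def massage_labels(y, s, scores, protected=0):
    """Relabel towards equal positive-label rates across groups.

    Flips the m highest-scoring negatives in the protected group to positive
    and the m lowest-scoring positives in the other group to negative, where
    m = gap * n_prot * n_other / (n_prot + n_other) equalises the two rates.
    """
    y, s, scores = np.asarray(y).copy(), np.asarray(s), np.asarray(scores)
    prot, other = (s == protected), (s != protected)
    n_prot, n_other = prot.sum(), other.sum()
    gap = y[other].mean() - y[prot].mean()
    m = int(round(gap * n_prot * n_other / (n_prot + n_other)))
    m = max(0, min(m, (prot & (y == 0)).sum(), (other & (y == 1)).sum()))
    promote = np.where(prot & (y == 0))[0]
    demote = np.where(other & (y == 1))[0]
    y[promote[np.argsort(-scores[promote])][:m]] = 1
    y[demote[np.argsort(scores[demote])][:m]] = 0
    return y

# Hypothetical example: group 0 starts at a 25% positive rate, group 1 at 75%.
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])
scores = np.array([0.2, 0.6, 0.4, 0.9, 0.3, 0.5, 0.8, 0.7])
print(massage_labels(y, s, scores))  # both groups end up at a 50% rate
```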
“…Similarly, Iosifidis et al. [34] used clustering across the sensitive attribute and labels to come up with representative training data to train models, and Kamiran et al. [37] explored multiple techniques involving sampling and re-weighing of training instances as pre-processing steps before applying machine learning models. Other popular pre-processing techniques involve relabelling and perturbation [39], details of which we omit from the paper.…”
Section: Pre-processing
confidence: 99%
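
The sampling-based pre-processing mentioned above can be illustrated with a simplified stand-in (an assumption for illustration, not the cited authors' exact procedure): a stratified over-sampling step that equalises the size of every (sensitive attribute, label) cell before training.

```python
import numpy as np

def oversample_by_group_and_label(X, y, s, seed=0):
    """Over-sample each (sensitive attribute, label) cell to the size of the
    largest cell, so all group/label combinations are equally represented."""
    rng = np.random.default_rng(seed)
    cells = [np.where((s == sv) & (y == yv))[0]
             for sv in np.unique(s) for yv in np.unique(y)]
    cells = [c for c in cells if len(c) > 0]
    target = max(len(c) for c in cells)
    idx = np.concatenate([rng.choice(c, size=target, replace=True) for c in cells])
    rng.shuffle(idx)
    return X[idx], y[idx], s[idx]

# Hypothetical usage with a toy feature matrix.
X = np.arange(16, dtype=float).reshape(8, 2)
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])
X_bal, y_bal, s_bal = oversample_by_group_and_label(X, y, s)
```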
“…Moreover, they often require discretization of numeric sensitive features such as age, which can alter bias measures' outputs [7]. Individual-based fairness measures require strong assumptions such as the availability of an agreed-upon similarity metric, or knowledge of the underlying data-generating process [13]. Additionally, they act as bias proxies as they do not measure bias directly.…”
Section: Introduction
confidence: 99%
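
The point about discretisation of numeric sensitive features can be made concrete with a small sketch: the same predictions, measured against the same underlying age variable, yield different values of a group-based bias measure (here a simple demographic-parity gap) depending on how the ages are binned. All data below are synthetic and illustrative.

```python
import numpy as np

def demographic_parity_gap(y_pred, s):
    """Largest difference in positive-prediction rates between groups of s."""
    rates = [y_pred[s == v].mean() for v in np.unique(s)]
    return max(rates) - min(rates)

# Synthetic predictions whose positive rate drifts gently with age.
rng = np.random.default_rng(0)
age = rng.integers(18, 70, size=2000)
y_pred = (rng.random(2000) < 0.3 + 0.004 * (age - 18)).astype(int)

coarse_bins = (age >= 40).astype(int)                    # 2 age groups
fine_bins = np.digitize(age, bins=[25, 35, 45, 55, 65])  # 6 age groups
print(demographic_parity_gap(y_pred, coarse_bins))
print(demographic_parity_gap(y_pred, fine_bins))  # typically a larger gap
```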