2019
DOI: 10.2139/ssrn.3309776
Eliminating Latent Discrimination: Train Then Mask

Abstract: How can we control for latent discrimination in predictive models? How can we provably remove it? Such questions are at the heart of algorithmic fairness and its impacts on society. In this paper, we define a new operational fairness criterion, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics. Our notion of fairness effectively controls for sensitive features and provides diagnostics for deviations from fair decision making. We then establish analytical and algorith…
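The abstract cuts off mid-sentence, but the title names the core mechanism: train the model with the sensitive feature included, then mask that feature at prediction time. Below is a minimal sketch of that idea; the toy data, the logistic-regression model, the column layout, and the MASK_VALUE constant are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch of the "train then mask" idea: train WITH the sensitive
# feature, then fix it to a constant at prediction time. All names and the
# choice of model are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two ordinary features plus one binary sensitive feature.
n = 1_000
X_plain = rng.normal(size=(n, 2))      # non-sensitive features
s = rng.integers(0, 2, size=(n, 1))    # sensitive feature (e.g., group)
X = np.hstack([X_plain, s])
y = (X_plain[:, 0] + 0.5 * s[:, 0]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Step 1: train on ALL features, including the sensitive one, so its effect
# is absorbed by its own coefficient instead of leaking into correlated
# proxy features (the omitted-variable-bias concern from the abstract).
model = LogisticRegression().fit(X, y)

# Step 2: at prediction time, mask the sensitive feature by setting it to
# the same constant for everyone, so it cannot differentiate individuals.
MASK_VALUE = 0
X_masked = X.copy()
X_masked[:, -1] = MASK_VALUE
predictions = model.predict(X_masked)
```

The design intuition, echoing the omitted-variable-bias framing in the abstract: dropping the sensitive feature before training lets its effect leak into correlated proxies, whereas including it gives it an explicit coefficient whose influence can then be neutralized by fixing the feature to a single value at deployment.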

Cited by 2 publications (2 citation statements), published in 2019 and 2022.
References 18 publications (25 reference statements).
“…The inclusion of sensitive data should be based on the potential for latent discrimination even in the absence of sensitive data, the relative availability and completeness of sensitive attributes, a priori knowledge of which sensitive features are responsible for bias, and many other related factors. 112,113 Uniformly defining which features should or should not be included in a model is overly restrictive. Our checklist was designed to give model developers a framework with which to discuss these sensitive yet important topics.…”
Section: Discussion (mentioning)
confidence: 99%
“…The work in this realm focuses on biased data or biased algorithms; however, using these biased algorithms in decision-making systems would in turn generate more biased data. This makes the causality of the fairness problem more complicated and exacerbates the problem even further (Barocas et al., 2017; Ghili et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%