Companion Proceedings of the Web Conference 2020
DOI: 10.1145/3366424.3383556
Convex Fairness Constrained Model Using Causal Effect Estimators

Abstract: Recent years have seen much research on fairness in machine learning. Here, mean difference (MD), or demographic parity, is one of the most popular measures of fairness. However, MD quantifies not only discrimination but also explanatory bias, which is the difference in outcomes justified by explanatory features. In this paper, we devise novel models, called FairCEEs, which remove discrimination while keeping explanatory bias. The models are based on estimators of causal effect utilizing propensity score analysis…
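To make the quantities in the abstract concrete, here is a minimal sketch on synthetic data. It is not the paper's FairCEE models; it only illustrates the two building blocks the abstract contrasts: the mean difference (MD, the demographic parity gap) and a simple propensity-score-based (inverse probability weighting) estimate of the causal effect of the sensitive attribute. All variable names and the data-generating process are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical toy data: s is the sensitive attribute, x is an explanatory
# feature correlated with s, and y is the outcome.
n = 2000
s = rng.integers(0, 2, size=n)                          # sensitive attribute (0/1)
x = rng.normal(loc=0.5 * s, scale=1.0)                  # explanatory feature
y = 1.0 * x + 0.3 * s + rng.normal(scale=0.5, size=n)   # outcome

# Mean difference (MD) / demographic parity gap: difference in average
# outcome between the two sensitive groups.  As the abstract notes, this
# gap mixes discrimination with explanatory bias coming from x.
md = y[s == 1].mean() - y[s == 0].mean()

# Propensity-score (IPW) estimate of the causal effect of s on y,
# adjusting for the explanatory feature x.
ps_model = LogisticRegression().fit(x.reshape(-1, 1), s)
e = ps_model.predict_proba(x.reshape(-1, 1))[:, 1]      # propensity scores
ipw = np.mean(s * y / e) - np.mean((1 - s) * y / (1 - e))

print(f"mean difference (MD): {md:.3f}")
print(f"IPW causal-effect estimate: {ipw:.3f}")
```

In this toy setup the MD is inflated by the explanatory feature x, while the IPW estimate adjusts for it, which mirrors the abstract's goal of removing discrimination while keeping explanatory bias.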

Cited by 4 publications (1 citation statement). References: 24 publications.
“…- Tasks in fairness literature: fair classification (Yang et al., 2020a; Sharifi-Malvajerdi et al., 2019; Heidari et al., 2018; Lohaus et al., 2020; Cotter et al., 2019; Creager et al., 2019; Cotter et al., 2018), fair regression evaluation (Heidari et al., 2019a), fair few-shot learning (Slack et al., 2020, 2019a), rich-subgroup fairness evaluation (Kearns et al., 2019), rich-subgroup fair classification (Kearns et al., 2018), fair regression (Chzhen et al., 2020a,b; Romano et al., 2020; Agarwal et al., 2019; Mary et al., 2019; Komiyama et al., 2018; Ogura and Takeda, 2020; Berk et al., 2017), fair representation learning (Ruoss et al., 2020), robust fair classification (Mandal et al., 2020) - Tasks in fairness literature: fair classification (He et al., 2020b; Sharma et al., 2020b; Goel et al., 2018; Oneto et al., 2019a; Celis et al., 2019b; Canetti et al., 2019; Cho et al., 2020; Savani et al., 2020; Donini et al., 2018; Heidari et al., 2018; Russell et al., 2017; Quadrianto and Sharmanska, 2017; Calmon et al., 2017; DiCiccio et al., 2020; Xu et al., 2020; Vargo et al., 2021; Roh et al., 2021; Maity et al., 2021; …”
Section: A36 Communities and Crime
Citation type: mentioning (confidence: 99%)