The World Wide Web Conference 2019
DOI: 10.1145/3308558.3313559
Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality

Abstract: As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE),…
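The FACE notion in the abstract measures discrimination as a difference in average potential outcomes under interventions on the protected attribute. A minimal plug-in sketch of that idea, not the paper's actual estimator (the `face_estimate` helper and the toy models are hypothetical illustrations):

```python
import numpy as np

def face_estimate(model, X, attr_idx, a0=0, a1=1):
    """Naive plug-in estimate of a FACE-style gap: set the protected
    attribute to a1 vs. a0 for every individual, then compare the
    model's average predicted outcomes under the two interventions."""
    X1 = X.copy(); X1[:, attr_idx] = a1
    X0 = X.copy(); X0[:, attr_idx] = a0
    return float(np.mean(model(X1)) - np.mean(model(X0)))

# toy setup: protected attribute lives in column 1
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
X[:, 1] = rng.integers(0, 2, size=1000)

fair_model = lambda X: 2.0 * X[:, 0]  # ignores the protected attribute
gap = face_estimate(fair_model, X, attr_idx=1)
print(abs(gap) < 1e-9)  # → True: ignoring the attribute yields zero gap
```

A model that uses the protected attribute directly would show a nonzero gap; real estimation would also have to account for confounding, which the naive intervention above ignores.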

Cited by 68 publications (59 citation statements)
References 49 publications
“…In predictive healthcare, the absence of causal relation can raise questions about the conclusions that can be drawn from outcomes of DL models. Furthermore, fairness in decision making can better be enforced through the lens of causal reasoning [117], [118]. The estimation of the causal effect of some variable(s) on a target output (e.g., target class in multi-class classification problem) is important to ensure fair predictions.…”
Section: ML For Healthcare: Challenges
confidence: 99%
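The excerpt's point about estimating the causal effect of a variable on a target output can be made concrete with a back-door adjustment (standardization) sketch. This is my own illustration with hypothetical data and helper names, not an estimator from the cited works:

```python
import numpy as np

def adjusted_effect(T, Y, Z):
    """Back-door adjustment estimate of the average causal effect of a
    binary variable T on outcome Y, adjusting for a discrete confounder Z:
    E[Y|do(T=1)] - E[Y|do(T=0)] = sum_z P(z) * (E[Y|T=1,z] - E[Y|T=0,z])."""
    effect = 0.0
    for z in np.unique(Z):
        mask = Z == z
        y1 = Y[mask & (T == 1)].mean()  # outcome mean among treated in stratum z
        y0 = Y[mask & (T == 0)].mean()  # outcome mean among untreated in stratum z
        effect += mask.mean() * (y1 - y0)  # weight by P(Z = z)
    return float(effect)

# tiny worked example (hypothetical data)
T = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # variable whose effect we estimate
Y = np.array([0, 1, 0, 1, 1, 1, 0, 1])  # observed outcome
Z = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # discrete confounder
print(adjusted_effect(T, Y, Z))  # → 0.75
```

This assumes Z satisfies the back-door criterion and every (T, Z) stratum is observed; in practice both assumptions need to be argued from a causal model, which is exactly the challenge the excerpt raises.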
“…Beyond the criteria mentioned in section 5, numerous other fairness metrics have been proposed, such as procedural fairness (Grgić-Hlača et al, 2016 ) and causal effects (Madras et al, 2018 ; Khademi et al, 2019 ). Meanwhile, other papers have emphasized that simply satisfying a particular definition of fairness is no guarantee of the broader outcomes people care about, such as justice (Hu and Chen, 2018b ).…”
Section: Additional Related Work
confidence: 99%
“…Further, the system evaluation design mitigated machine learning bias by using observed behaviors (the observed act of deception and observed verbal/nonverbal behaviors during responses) to train classifiers. As the system expands to predict constructs beyond deception, such as interview performance or cultural fit, future research must incorporate methods to alleviate bias (e.g., gender or race discrimination) caused by training data [41]. This bias is the result of "garbage-in, garbage-out" effects; using training data derived from subjective human interpretation naturally includes the inherent biases of humans.…”
Section: Limitations and Future Work
confidence: 99%