2021
DOI: 10.48550/arxiv.2110.10200
Preprint

fairadapt: Causal Reasoning for Fair Data Pre-processing

Abstract: Machine learning algorithms are useful for various prediction tasks, but they can also learn to discriminate based on gender, race, or other sensitive attributes. This realization gave rise to the field of fair machine learning, which aims to measure and mitigate such algorithmic bias. This manuscript describes the R package fairadapt, which implements a causal inference pre-processing method. By making use of a causal graphical model and the observed data, the method can be used to address hypothetical q…

Cited by 2 publications (2 citation statements)
References 6 publications
“…These datasets contain sensitive attributes such as gender and race, which were utilised to measure the fairness of the clustering algorithms. The pre-processed UCI Adult and COMPAS datasets were downloaded from the fairadapt package [Plečko et al., 2021], and the pre-processing procedure is described in detail in [Plečko and Meinshausen, 2020]. The datasets were pre-processed by excluding features such as relationship, final weight, education, capital gain, and capital loss.…”
Section: Methods
confidence: 99%
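The feature-exclusion step described in the citation statement above can be sketched with pandas. This is a minimal illustration, not the citing paper's actual pipeline: the column names follow the common UCI Adult encoding (e.g. `fnlwgt` for final weight), and the two toy rows are invented for demonstration, not taken from the dataset.

```python
import pandas as pd

# Features excluded in the cited pre-processing step (UCI Adult-style names,
# assumed here; the papers list them as relationship, final weight,
# education, capital gain, and capital loss).
DROPPED = ["relationship", "fnlwgt", "education", "capital-gain", "capital-loss"]

# Two illustrative rows standing in for the Adult dataset.
adult = pd.DataFrame({
    "age": [39, 50],
    "relationship": ["Not-in-family", "Husband"],
    "fnlwgt": [77516, 83311],
    "education": ["Bachelors", "Bachelors"],
    "capital-gain": [2174, 0],
    "capital-loss": [0, 0],
    "sex": ["Male", "Male"],
})

# errors="ignore" keeps the drop robust if a listed column is absent.
pre = adult.drop(columns=DROPPED, errors="ignore")
print(list(pre.columns))  # ['age', 'sex']
```

After the drop, only the retained features (here `age` and the sensitive attribute `sex`) remain for the downstream fairness evaluation.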
“…If one is also interested in constructing a new fair predictor before using Alg. 1 (instead of testing an existing one), one may use tools for causally removing discrimination, such as (Chiappa 2019) or (Plečko and Meinshausen 2020; Plečko, Bennett, and Meinshausen 2021). In Appendix E we show formally that a predictor Y satisfying the conditions of Alg.…”
Section: Reconciling Statistical and Predictive Parity
confidence: 99%