Proceedings of the 2018 World Wide Web Conference (WWW '18)
DOI: 10.1145/3178876.3186133
Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification

Abstract: Machine learning bias and fairness have recently emerged as key issues due to the pervasive deployment of data-driven decision making in a variety of sectors and services. It has often been argued that unfair classifications can be attributed to bias in training data, but previous attempts to "repair" training data have led to limited success. To circumvent shortcomings prevalent in data repairing approaches, such as those that weight training samples of the sensitive group (e.g. gender, race, financial status…)
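The sketch below illustrates the general idea behind the paper's title, adaptive sample reweighting: retrain a classifier while shifting weight toward protected-group samples the current model misclassifies. The update rule, function name, and learner choice here are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of adaptive sensitive reweighting
# (assumed simplified update rule; NOT the paper's exact model).
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_reweighting(X, y, sensitive, n_iter=10, alpha=1.0):
    """Iteratively retrain a classifier, upweighting protected-group
    samples that the current model gets wrong.
    sensitive: 0/1 array marking protected-group membership."""
    n = len(y)
    w = np.ones(n)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_iter):
        clf.fit(X, y, sample_weight=w)
        errors = (clf.predict(X) != y).astype(float)
        # Upweight misclassified protected-group samples so the next
        # round pays more attention to them; renormalize weights.
        w = 1.0 + alpha * errors * sensitive
        w *= n / w.sum()
    return clf, w
```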

Cited by 105 publications (76 citation statements) · References 20 publications

Citation statements (ordered by relevance):
“…In-processing methods modify the learning algorithm to eliminate discriminatory behavior. These interventions are typically learner-specific [5], [9]–[12]. For instance, Zafar et al. [5] add fairness constraints to the objective function of a logistic regression model.…”
Section: Related Work (mentioning)
confidence: 99%
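The constraint used by Zafar et al. bounds the covariance between the sensitive attribute and the signed distance to the decision boundary. The sketch below uses a soft-penalty variant of that idea for simplicity; the penalty form and the lam weight are assumptions, whereas the original work imposes hard constraints.

```python
# Sketch of a Zafar-style fairness penalty on logistic regression:
# penalize covariance between the sensitive attribute and the
# decision score. Penalized (not hard-constrained) variant.
import numpy as np
from scipy.optimize import minimize

def fair_logreg(X, y, s, lam=1.0):
    """y in {0,1}, s = sensitive attribute (0/1). Returns theta."""
    s_centered = s - s.mean()

    def objective(theta):
        z = X @ theta
        # Numerically stable logistic loss for labels in {0,1}.
        log_loss = np.mean(np.logaddexp(0.0, -z) + (1 - y) * z)
        # Covariance between sensitive attribute and decision score.
        cov = np.abs(np.mean(s_centered * z))
        return log_loss + lam * cov

    theta0 = np.zeros(X.shape[1])
    return minimize(objective, theta0, method="L-BFGS-B").x
```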
“…While there is an extensive literature on model interpretation [185], [186], it is not clear how to act on feedback at the data level. In the model fairness literature [187], one approach to reducing unfairness is to fix the data. In data cleaning, ActiveClean and BoostClean are interesting approaches for fixing the data to improve model accuracy.…”
Section: Future Research Challenges (mentioning)
confidence: 99%
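As one concrete instance of "fixing the data" rather than the model, the classic reweighing preprocessing scheme of Kamiran and Calders assigns each (group, label) cell a weight so that the sensitive attribute and the label become statistically independent in the weighted data. A minimal sketch, with illustrative variable names:

```python
# Sketch of reweighing-style data repair (Kamiran & Calders):
# weight each (group, label) cell by expected/observed frequency.
import numpy as np

def reweighing_weights(s, y):
    """s, y: 0/1 arrays. Returns one weight per sample."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for si in (0, 1):
        for yi in (0, 1):
            mask = (s == si) & (y == yi)
            expected = (s == si).mean() * (y == yi).mean()
            observed = mask.mean()
            # Cells rarer than independence predicts get weight > 1.
            w[mask] = expected / observed if observed > 0 else 0.0
    return w
```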
“…Under such circumstances, one may wish to transition to a new model that inherits the original predictive performance but ensures non-discriminatory outputs. A possible option is to edit the sensitive attributes to remove any bias, thereby reducing the disparate impact in the task, and then to train a new model on the edited dataset [62, 63]. Alternatively, in very specific scenarios where the sensitive information is not leaked through additional features, it is possible to build a copy by removing the protected data variables [64].…”
Section: Differential Replication in Practice (mentioning)
confidence: 99%
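The "edit the data to remove bias" option referenced in [62, 63] is often realized as disparate-impact repair in the spirit of Feldman et al.: map each group's values of a feature onto a common distribution via quantiles, so group membership can no longer be inferred from that feature. The sketch below uses the pooled distribution as the common target (the original work uses a median distribution) and shows full repair only:

```python
# Sketch of quantile-based disparate impact repair for one numeric
# feature. Full repair shown; partial repair would interpolate
# between the original and repaired values.
import numpy as np
from scipy.stats import rankdata

def repair_feature(x, s):
    """x: numeric feature values, s: group labels. Returns a
    repaired copy of x."""
    x_rep = x.astype(float).copy()
    pooled = np.sort(x)  # target distribution shared by all groups
    for g in np.unique(s):
        mask = s == g
        # Within-group quantile of each value, in (0, 1].
        q = rankdata(x[mask], method="average") / mask.sum()
        # Replace each value with the pooled value at that quantile.
        idx = np.clip((q * len(pooled)).astype(int) - 1,
                      0, len(pooled) - 1)
        x_rep[mask] = pooled[idx]
    return x_rep
```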