Proceedings of the 2016 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611974348.17
A Confidence-Based Approach for Balancing Fairness and Accuracy

Abstract: We study three classical machine learning algorithms in the context of algorithmic fairness: adaptive boosting, support vector machines, and logistic regression. Our goal is to maintain the high accuracy of these learning algorithms while reducing the degree to which they discriminate against individuals because of their membership in a protected group. Our first contribution is a method for achieving fairness by shifting the decision boundary for the protected group. The method is based on the theory of margin…
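
To make the boundary-shifting idea in the abstract concrete, here is a minimal sketch, not the authors' exact algorithm: it assumes binary 0/1 labels, a binary protected-group indicator s given as a NumPy array, any scikit-learn classifier exposing decision_function (logistic regression stands in for boosting or an SVM), and illustrative names such as fit_shifted_boundary, theta, and tol.

```python
# Minimal sketch, not the paper's exact algorithm: shift the decision
# boundary for the protected group using the classifier's confidence scores.
# Assumes binary labels y in {0, 1}, a binary protected-group indicator s,
# and a scikit-learn estimator exposing decision_function.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_shifted_boundary(X, y, s, tol=0.01):
    """Train a classifier, then pick the smallest score shift for the
    protected group that brings both groups' positive rates within tol."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.decision_function(X)

    def rate_gap(theta):
        preds = (scores + theta * (s == 1)) >= 0
        return abs(preds[s == 1].mean() - preds[s == 0].mean())

    spread = scores.max() - scores.min()
    candidates = np.linspace(-spread, spread, 401)
    candidates = candidates[np.argsort(np.abs(candidates))]  # try small shifts first
    theta = min(candidates, key=rate_gap)                    # fallback: best gap found
    for cand in candidates:
        if rate_gap(cand) <= tol:
            theta = cand
            break
    return clf, theta

def predict_shifted(clf, theta, X, s):
    """Apply the shift only to members of the protected group (s == 1)."""
    return ((clf.decision_function(X) + theta * (s == 1)) >= 0).astype(int)
```

Equalizing positive-prediction rates targets statistical parity; a different fairness measure can be substituted by changing rate_gap, and the trained clf itself is left untouched, which is what makes this a post-processing step.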

Cited by 157 publications (124 citation statements) | References 12 publications

“…For instance, Zafar et al [5] add fairness-related constraints in the objective function of a logistic regression model to account for fairness. Post-processing methods try to modify the model's predictions or decision boundary in order to ensure fairness [10], [13], [14]. Kamiran et al [10] propose a fair decision tree learner that combines a fairness-aware splitting criterion with post-processing leaf-relabeling.…”
Section: Related Work
confidence: 99%
“…Kamiran et al [10] propose a fair decision tree learner that combines a fairness-aware splitting criterion with post-processing leaf-relabeling. Fish et al [13] adjust the decision boundary of a boosting model based on the confidence scores of the misclassified instances. Finally, class-imbalance methods aim to deal with skewed class distributions.…”
Section: Related Work
confidence: 99%
“…Recent works [17,44] have occasionally proposed similar iterative methods as baselines to compare themselves to. However, our work differs in that it employs an inferred rather than a heuristic model to produce bias-related probabilities (see Section 4).…”
Section: Algorithm 1 Adaptive Sensitive Reweighting
confidence: 99%
“…baselines employed by [17,44]) propose that weights in Eq. 7a should be proportional to classifier error.…”
Section: Weighting By Error Is Inadequate
confidence: 99%
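
For contrast, the "weights proportional to classifier error" baseline criticized in the statement above could be sketched roughly as follows. This is an assumption-laden illustration: Eq. 7a of the citing paper is not reproduced, and the helper name reweight_by_error, the round count, and the weight schedule are invented for the example.

```python
# Hedged sketch of the "weights proportional to classifier error" baseline
# mentioned above. Eq. 7a of the citing paper is not reproduced; the update
# schedule, round count, and labels-encoded-as-0/1 assumption are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweight_by_error(X, y, n_rounds=5):
    """Iteratively retrain, upweighting examples the current model gets wrong."""
    w = np.ones(len(y), dtype=float)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(X, y, sample_weight=w)
        p_true = clf.predict_proba(X)[np.arange(len(y)), y]  # prob. of the true class
        w = (1.0 - p_true) + 1e-3   # weight roughly proportional to current error
        w *= len(y) / w.sum()       # keep the total weight constant across rounds
    return clf
```

The point of the quoted passage is that such error-driven weights ignore group membership, which is why the citing authors infer bias-related probabilities instead of relying on this heuristic.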