2018
DOI: 10.1016/j.ins.2017.09.064
Exploiting reject option in classification for social discrimination control

Abstract: Social discrimination is said to occur when an unfavorable decision for an individual is influenced by her membership in certain protected groups, such as females and minority ethnic groups. Such discriminatory decisions often exist in historical data. Despite recent work in discrimination-aware data mining, there remains a need for robust, yet easily usable, methods for discrimination control. In this paper, we utilize reject option in classification, a general decision-theoretic framework for handling inst…

Cited by 47 publications (43 citation statements)
References 17 publications
“…Two post-processing methods that optimise for EOD are Equalised Odds (EO) [28,51] and Calibrated Equalised Odds [51]. Other post-processing approaches include the modification of the probability of positive decisions for Naive Bayes (NB) [14], leaf relabelling for Decision Trees (DT) [37], and further investigation of uncertain labels [39].…”
Section: Post-processing (Post-processing Methods Change the Prediction Outcomes of a Model to Mitigate Bias After the Model Has Been Trained)
confidence: 99%
“…To compare the fairness-accuracy trade-off achieved by bias mitigation methods, practitioners either observe the fairness and accuracy changes in separate graphs, or visualise them in a 2-dimensional graph (one dimension is accuracy, the other is fairness) [12, 14-16, 24, 36, 37, 39, 41, 42, 51, 60]. The proposed mitigation methods are often compared with previous methods [16, 17, 37-41, 51, 61, 69], different configurations [12, 14, 24, 38-40], the original non-optimised classifier [12, 15, 36, 61, 62], or a classifier trained without using protected attributes [12, 15, 36, 69].…”
Section: Fairness-Accuracy Trade-off
confidence: 99%
“…• Pre-processing algorithms: in this approach, data is pre-processed before classification in such a way that discrimination or bias is reduced. Kamiran [38] proposed an approach which gives favorable outcomes to unprivileged groups and unfavorable outcomes to privileged groups within a confidence band around the decision boundary, where the classifier's uncertainty is highest.…”
Section: Removing Ethical Bias
confidence: 99%
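The decision rule described in the statement above can be sketched in a few lines. This is a minimal, illustrative implementation of a reject-option-style rule, not the authors' exact method: it assumes a probabilistic binary classifier and a single binary protected attribute, and the function name, `theta` threshold, and test values are all hypothetical choices for demonstration.

```python
import numpy as np

def reject_option_classify(proba, unprivileged, theta=0.7):
    """Relabel instances inside the low-confidence band (illustrative sketch).

    proba        : estimated P(y = favorable | x) for each instance
    unprivileged : True where the instance belongs to the unprivileged group
    theta        : confidence threshold; instances with max(p, 1-p) < theta
                   lie in the "critical region" around the decision boundary
    """
    proba = np.asarray(proba, dtype=float)
    unprivileged = np.asarray(unprivileged, dtype=bool)

    labels = (proba >= 0.5).astype(int)            # default decisions
    critical = np.maximum(proba, 1.0 - proba) < theta  # uncertain instances

    labels[critical & unprivileged] = 1            # favorable to unprivileged
    labels[critical & ~unprivileged] = 0           # unfavorable to privileged
    return labels
```

Outside the critical region the classifier's decisions are left untouched, so widening `theta` trades accuracy for stronger discrimination control, which matches the fairness-accuracy trade-off discussed earlier.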