Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3461702.3462592

Rawlsian Fair Adaptation of Deep Learning Classifiers

Abstract: Group-fairness in classification aims for equality of a predictive utility across different sensitive sub-populations, e.g., race or gender. Equality or near-equality constraints in group-fairness often worsen not only the aggregate utility but also the utility for the least advantaged sub-population. In this paper, we apply the principles of Pareto-efficiency and least-difference, taking accuracy as an illustrative utility, and arrive at the Rawls classifier that minimizes the error rate on the …
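As a rough sketch of that minimax idea (not the paper's actual adaptation procedure; the function and variable names below are illustrative assumptions), the worst-group objective can be expressed as a differentiable training loss in PyTorch:

import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, groups, num_groups):
    # Per-example cross-entropy as a differentiable proxy for error rate.
    per_example = F.cross_entropy(logits, labels, reduction="none")
    # Average loss within each sensitive group present in the batch.
    group_losses = [per_example[groups == g].mean()
                    for g in range(num_groups) if (groups == g).any()]
    # Rawlsian (minimax) criterion: the worst-off group's average loss.
    return torch.stack(group_losses).max()

Minimizing this loss pushes down the error of whichever group is currently worst off, which matches the behavior the abstract ascribes to the Rawls classifier.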

Cited by 8 publications (1 citation statement)
References 17 publications
“…The hardness of adapting Rawlsian principles into algorithms is apparent from these works. For example, Shah et al [16] propose a classifier that minimises the error rate of the worst-off sensitive group; they call this a Rawls classifier. Hashimoto et al [7] employ Rawlsian ideas to mitigate the amplification of representation disparity…”
Section: Rawlsian Ideas of Fairness in ML
Confidence: 99%
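Written formally (in notation assumed here for illustration, not taken from either paper), the classifier this statement describes solves

\[
h_{\mathrm{Rawls}} \in \operatorname*{arg\,min}_{h \in \mathcal{H}} \; \max_{a \in \mathcal{A}} \; \Pr\left[ h(X) \neq Y \mid A = a \right]
\]

where \(\mathcal{A}\) ranges over the sensitive groups and \(\mathcal{H}\) is the hypothesis class.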