2020
DOI: 10.1609/aaai.v34i04.5970
Pairwise Fairness for Ranking and Regression

Abstract: We present pairwise fairness metrics for ranking models and regression models that form analogues of statistical fairness notions such as equal opportunity, equal accuracy, and statistical parity. Our pairwise formulation supports both discrete protected groups, and continuous protected attributes. We show that the resulting training problems can be efficiently and effectively solved using existing constrained optimization and robust optimization techniques developed for fair classification. Experiments illust…
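The group-dependent pairwise accuracy underlying the abstract's metrics can be sketched as follows: among pairs with a clear label ordering whose preferred item belongs to a given group, count the fraction that the model's scores order correctly. This is a minimal illustrative sketch; the function name and toy data are not from the paper.

```python
import itertools

def pairwise_accuracy(scores, labels, groups, group):
    """Fraction of correctly ordered pairs whose truly preferred item
    belongs to `group` (a group-dependent pairwise accuracy)."""
    correct = total = 0
    for i, j in itertools.permutations(range(len(scores)), 2):
        # Only pairs where item i is truly preferred to item j, and the
        # preferred item is a member of the group of interest.
        if labels[i] > labels[j] and groups[i] == group:
            total += 1
            if scores[i] > scores[j]:
                correct += 1
    return correct / total if total else float("nan")

# Toy data (illustrative): two groups "A" and "B".
scores = [0.9, 0.2, 0.3, 0.4]
labels = [1, 0, 1, 0]
groups = ["A", "A", "B", "B"]
acc_A = pairwise_accuracy(scores, labels, groups, "A")
acc_B = pairwise_accuracy(scores, labels, groups, "B")
```

A gap between `acc_A` and `acc_B` (here, the model orders group A's preferred items correctly more often than group B's) is the kind of disparity the paper's equal-accuracy-style constraints target.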

Cited by 78 publications
(68 citation statements)
References 8 publications
“…The fairness of the models is evaluated via the rND metric (Section IV-A1) and explicitly by the accuracy obtained when predicting the sensitive attribute from the representations they learn. We also report the group-dependent pairwise accuracies (Section IV-A2), which have been recently developed [31]. We explored a number of hyperparameter combinations, which we report in Table I.…”
Section: Methods
confidence: 99%
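The rND metric referenced in this excerpt is commonly formulated as a log-discounted gap between the protected group's share in each top-i prefix and its overall share, normalized by the worst-case ranking. The sketch below assumes that common formulation; the cited paper's exact variant may differ in cutoffs or normalization.

```python
import math

def rnd(ranking_groups, protected, step=10):
    """Normalized discounted difference (rND), one common formulation:
    sum over prefix cutoffs of |protected share in top-i - overall share|,
    discounted by 1/log2(i) and normalized by the worst-case ranking."""
    n = len(ranking_groups)
    overall = sum(g == protected for g in ranking_groups) / n

    def raw(order):
        return sum(
            abs(sum(g == protected for g in order[:i]) / i - overall)
            / math.log2(i)
            for i in range(step, n + 1, step)
        )

    # Worst case: every protected member pushed to the bottom of the list.
    k = sum(g == protected for g in ranking_groups)
    worst = ["_other_"] * (n - k) + [protected] * k
    denom = raw(worst)
    return raw(ranking_groups) / denom if denom else 0.0

# Illustrative rankings of 20 items, half from group "F".
mixed = ["F", "M"] * 10            # evenly interleaved
skewed = ["F"] * 10 + ["M"] * 10   # all "F" at the top
```

An evenly interleaved ranking scores 0 (no representation gap at any cutoff), while a fully segregated ranking scores 1.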
“…Singh and Joachims [33] have instead focused on average exposure, which is defined as the average probabilities of all individuals belonging to a specific group to be ranked at the top of the list. More recently, Narasimhan et al [31] have argued for ranking fairness to be measured, in continuity with the disparate mistreatment concept in classification, as the difference in rank accuracy between different groups (Section IV-A2). Learning frameworks in fair ranking have so far been focused on post-processing methods [43], [8], [33].…”
Section: B. Fairness
confidence: 99%
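The average exposure notion attributed to Singh and Joachims in this excerpt can be sketched with the widely used logarithmic position discount, where an item at rank r receives exposure 1/log2(r + 1); the cited work's exact weighting may differ. Names and data below are illustrative.

```python
import math

def average_exposure(ranking_groups, group):
    """Mean position-based exposure of a group's members, using the
    common logarithmic discount 1/log2(rank + 1)."""
    exposures = [
        1.0 / math.log2(rank + 1)
        for rank, g in enumerate(ranking_groups, start=1)
        if g == group
    ]
    return sum(exposures) / len(exposures) if exposures else 0.0

# Illustrative ranking alternating between groups "A" and "B".
ranking = ["A", "B", "A", "B"]
exp_A = average_exposure(ranking, "A")  # members at ranks 1 and 3
exp_B = average_exposure(ranking, "B")  # members at ranks 2 and 4
```

Even in an alternating ranking, the group holding the odd ranks receives more average exposure, which is why exposure-based and accuracy-based fairness criteria can disagree.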
“…(referred to as sensitive attributes). This has motivated researchers to investigate techniques for ensuring that learned models satisfy fairness properties (Dwork et al 2012; Feldman et al 2015; Goh et al 2016; Chouldechova 2017; Corbett-Davies et al 2017; Gajane and Pechenizkiy 2017; Kusner et al 2017; Cotter et al 2018; Mitchell, Potash, and Barocas 2018; Verma and Rubin 2018; Zhang and Bareinboim 2018; Chiappa and Isaac 2019; Narasimhan et al 2020). Most often, fairness desiderata are expressed as constraints on the lower order moments or other functions of distributions corresponding to different sensitive attributes.…”
Section: Introduction
confidence: 99%