Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (CIKM '17)
DOI: 10.1145/3132847.3132938

FA*IR: A Fair Top-k Ranking Algorithm

Abstract: In this work, we define and solve the Fair Top-k Ranking problem, in which we want to determine a subset of k candidates from a large pool of n ≫ k candidates, maximizing utility (i.e., select the "best" candidates) subject to group fairness criteria. Our ranked group fairness definition extends group fairness using the standard notion of protected groups and is based on ensuring that the proportion of protected candidates in every prefix of the top-k ranking remains statistically above or indistinguishable from a given …
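As a rough illustration of the ranked group fairness test sketched in the abstract, the Python snippet below computes, for an assumed target proportion p and significance level alpha, the minimum number of protected candidates required in every prefix under a binomial model, and checks a ranking against those minima. This is a simplified sketch, not the published algorithm: FA*IR additionally corrects the significance level for the multiple tests over prefixes, which is omitted here, and the function names are chosen for illustration.

# Simplified sketch of the ranked group fairness test: for every prefix of
# length i, the number of protected candidates must not be significantly
# below what a Binomial(i, p) model would produce at level alpha.
# (No multiple-testing correction, unlike the full FA*IR algorithm.)
from scipy.stats import binom

def min_protected_per_prefix(k, p, alpha=0.1):
    """For each prefix length i = 1..k, return the smallest count m with
    P[Binomial(i, p) <= m] > alpha, i.e. the minimum protected count that
    is not rejected by the one-sided binomial test."""
    required = []
    for i in range(1, k + 1):
        m = 0
        while binom.cdf(m, i, p) <= alpha:
            m += 1
        required.append(m)
    return required

def is_prefix_fair(ranking_is_protected, p, alpha=0.1):
    """ranking_is_protected[i] is True if the candidate at rank i belongs to
    the protected group; every prefix must meet its minimum protected count."""
    required = min_protected_per_prefix(len(ranking_is_protected), p, alpha)
    protected_so_far = 0
    for i, is_protected in enumerate(ranking_is_protected, start=1):
        protected_so_far += is_protected
        if protected_so_far < required[i - 1]:
            return False
    return True

# A top-10 list with all protected candidates at the bottom fails the test,
# while an alternating list passes it (p = 0.5, alpha = 0.1).
print(is_prefix_fair([False] * 7 + [True] * 3, p=0.5))  # False
print(is_prefix_fair([True, False] * 5, p=0.5))         # True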

Cited by 320 publications (42 citation statements) · References 25 publications

Citation statements (ordered by relevance):
“…The literature on fairness, accountability and transparency (FAT) is rapidly growing [35,59,87], accounting for its implications on social networks [84], risk score estimation [50,8], recommender system [51,89,15,75], resource allocation [54], individuals classification in order to prevent discrimination [26] and computational policy [77,41], using tools such as causal analysis [90], quantitative input influence [20] and machine learning [88,67].…”
Section: Fairness, Accountability and Transparency (mentioning)
confidence: 99%
“…In [13], we proposed a generative method to describe rankings that meet a particular fairness criterion (fairness probability f ) and are drawn from a dataset with a given proportion of members of a binary protected group (p). This method was used in FA*IR [14] to quantify fairness in every prefix of a top-k list. In our follow-up work (working paper), we are developing a pairwise measure that directly models the probability that a member of a protected group is preferred to a member of the non-protected group.…”
Section: Fairness (mentioning)
confidence: 99%
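The pairwise measure is only described at a high level in this excerpt; the sketch below shows one plausible way such a statistic could be estimated from a single ranking, namely the fraction of protected/non-protected pairs in which the protected candidate is ranked higher, with 0.5 indicating parity. The function name and interface are hypothetical and not taken from the cited work.

# Illustrative pairwise statistic: probability that a randomly chosen protected
# candidate is ranked above a randomly chosen non-protected candidate.
def pairwise_preference(ranking_is_protected):
    """ranking_is_protected[r] is True if the candidate at rank r (0 = best)
    belongs to the protected group; returns the share of protected/non-protected
    pairs won by the protected candidate (0.5 = parity)."""
    protected_ranks = [r for r, prot in enumerate(ranking_is_protected) if prot]
    other_ranks = [r for r, prot in enumerate(ranking_is_protected) if not prot]
    if not protected_ranks or not other_ranks:
        raise ValueError("need at least one candidate from each group")
    wins = sum(1 for rp in protected_ranks for rn in other_ranks if rp < rn)
    return wins / (len(protected_ranks) * len(other_ranks))

# Protected candidates clustered at the bottom of a top-6 list never win a pair.
print(pairwise_preference([False, False, False, False, True, True]))  # 0.0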
“…We select a binary version of the department size attribute DeptSizeBin from the CS departments dataset as the sensitive attribute, and treat both values ("large" and "small") as protected features. The summary view of the Fairness widget in our example presents the output of three fairness measures: FA*IR [14], proportion [15], and our own pairwise measure. All these measures are statistical tests, and whether a result is fair is determined by the computed p-value.…”
Section: Fairness (mentioning)
confidence: 99%
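As a rough illustration of how such p-value-based tests operate, the snippet below applies an exact one-sided binomial test to the protected group's share of a top-k list: the result counts as fair unless the observed count is significantly below the target proportion. It stands in for measures like the proportion test rather than reproducing any of them exactly, and the 0.05 threshold is an assumed default.

# Hedged sketch of a proportion-style fairness test decided by a p-value.
from scipy.stats import binomtest

def proportion_fair(num_protected_in_topk, k, p, significance=0.05):
    """Exact one-sided binomial test of whether the protected count in the
    top-k is significantly smaller than expected under Binomial(k, p);
    returns (is_fair, p_value)."""
    result = binomtest(num_protected_in_topk, n=k, p=p, alternative="less")
    return result.pvalue > significance, result.pvalue

# Example: 2 protected candidates in a top-20 with target proportion 0.4 is
# flagged as unfair (the p-value is well below 0.05).
print(proportion_fair(2, 20, 0.4))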
“…For example, algorithms trained on a dataset representing one population typically only work well for that population (Torralba & Efros, ); e.g., algorithms trained on data from sighted people do not work well for data from people with visual impairments (Gurari et al, ). Additionally, algorithms learn to reinforce biases that are present in datasets, as exemplified in use cases such as for university admission (Dwork, Hardt, Pitassi, Reingold, & Zemel, ), resume search (Zehlike et al, ), facial recognition (Buolamwini & Gebru, ), and criminal sentence determination (Chouldechova, ) where algorithms learned to discriminate against different user groups (including with respect to gender, race and ethnicity). Finally, biased datasets representing distinct populations can be used maliciously to train algorithms to identify and subsequently discriminate against that group (Narayanan & Shmatikov, ; Thomas & Kovashka, ).…”
Section: Introduction (mentioning)
confidence: 99%