Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3442188.3445887
Mitigating Bias in Set Selection with Noisy Protected Attributes

Cited by 39 publications (29 citation statements) | References 29 publications
“…This effect has been observed multiple times in the literature [22,32,35,39]. Specific solutions have been proposed, e.g., when only the protected attribute itself is corrupted or noisy [7,31,36,49]. However, as shown in [28], full protection against malicious manipulations of the training data is provably impossible when only one dataset is available.…”
Section: Preliminaries and Related Work, 2.1 Fair Classification
Mentioning confidence: 99%
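One concrete way a noisy protected attribute distorts fairness estimates: under symmetric, independent flips of a binary attribute, the measured demographic-parity gap shrinks toward zero as the flip rate grows. A minimal simulation sketch, assuming balanced groups and a true gap of 0.20 (the setup and numbers are illustrative, not taken from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.integers(0, 2, n)                    # true binary protected attribute
# positive-prediction rates of 0.6 vs 0.4 across true groups (true gap = 0.20)
yhat = rng.random(n) < np.where(z == 1, 0.6, 0.4)

for rho in (0.0, 0.1, 0.2, 0.3):
    flips = rng.random(n) < rho              # each label flips with rate rho
    z_obs = np.where(flips, 1 - z, z)        # observed (noisy) attribute
    gap = yhat[z_obs == 1].mean() - yhat[z_obs == 0].mean()
    print(f"flip rate {rho:.1f}: measured gap = {gap:.3f}")
```

With balanced groups the measured gap is (1 - 2ρ) times the true gap, so at ρ = 0.3 a true gap of 0.20 appears as roughly 0.08; a method that certifies fairness on the observed attribute alone can therefore understate the true disparity.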
“…Most of the recent works utilize average performance indicators broken down by sensitive groups. The metrics themselves measure disparities between groups in terms of positive predictive value (PPV), true/false positive rates (TPR, FPR) [5], error rates [11], or risk difference [57]. There is no consensus or general guidance for choosing the metric depending on context.…”
Section: Further Context and Related Work
Mentioning confidence: 99%
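For concreteness, all of the group-wise metrics the quote enumerates can be read off per-group confusion matrices. A minimal sketch (function and variable names are my own, not from the cited works):

```python
import numpy as np

def group_metrics(y_true, y_pred, group):
    """Per-group PPV, TPR, and FPR from binary labels and predictions."""
    out = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        tp = np.sum((yt == 1) & (yp == 1))
        fp = np.sum((yt == 0) & (yp == 1))
        fn = np.sum((yt == 1) & (yp == 0))
        tn = np.sum((yt == 0) & (yp == 0))
        out[g] = {
            "PPV": tp / max(tp + fp, 1),   # positive predictive value
            "TPR": tp / max(tp + fn, 1),   # true positive rate
            "FPR": fp / max(fp + tn, 1),   # false positive rate
        }
    return out
```

A disparity metric is then the gap (or ratio) of one of these quantities across groups; which quantity to compare is exactly the context-dependent choice the quote says lacks general guidance.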
“…as they might be if labelled using crowdsourced workers) and give conditions (such as conditional independence of the noisy sensitive features and the predicted label) under which post-processing a fixed classifier to equalize false positive or negative rates as measured under the proxy will reduce the true disparity between false positive or negative rates subject to the true sensitive features. In similar noise models, other works propose robust-optimization-based approaches to fairness-constrained training with noisy sensitive features, and Mehrotra and Celis [2021] consider the problem of fair subset selection. Lahoti et al. [2020] propose to solve a minimax optimization problem over an enormous set of "computationally identifiable" subgroups, under the premise that if there exists a good proxy for a sensitive feature, then it will be included as one of these computationally identifiable groups defined with respect to the other features.…”
Section: Related Work
Mentioning confidence: 99%
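The fair-subset-selection setting that this quote attributes to Mehrotra and Celis [2021] can be pictured as utility-maximizing top-k selection subject to a representation constraint that is corrected for attribute noise. The following heuristic sketch assumes a symmetric flip model with known rate ρ < 0.5; it is an illustration of the setting, not the paper's actual algorithm, and all names in it are hypothetical:

```python
import numpy as np

def denoised_share(z_obs_sel, rho):
    """Invert symmetric flip noise: if each binary label flips
    independently with known rate rho < 0.5, the observed group-1
    share p_obs relates to the true share p by
    p_obs = (1 - rho) * p + rho * (1 - p)."""
    p_obs = float(np.mean(z_obs_sel))
    return (p_obs - rho) / (1.0 - 2.0 * rho)

def fair_topk(utility, z_obs, k, rho, min_share):
    """Pick the top-k items by utility, then swap observed-group-1
    items in until the noise-corrected group-1 share of the
    selection reaches min_share (or no further swap is possible)."""
    order = list(np.argsort(-utility))        # indices, best utility first
    selected, rest = order[:k], order[k:]
    while denoised_share(z_obs[selected], rho) < min_share:
        outs = [i for i in selected if z_obs[i] == 0]  # candidates to drop
        ins = [i for i in rest if z_obs[i] == 1]       # candidates to add
        if not outs or not ins:
            break                             # target share not reachable
        drop = min(outs, key=lambda i: utility[i])     # cheapest to remove
        selected.remove(drop)
        selected.append(ins[0])               # best remaining group-1 item
        rest.remove(ins[0])
        rest.append(drop)
    return selected
```

The inversion in denoised_share is the standard method-of-moments correction for symmetric label noise; it requires ρ < 0.5 and becomes unreliable for small selections, which is part of why constraint satisfaction under noisy attributes needs the more careful treatment the cited works develop.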