Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3442188.3445915

Fair Classification with Group-Dependent Label Noise

Cited by 47 publications (46 citation statements) · References 14 publications
“…Group-dependent T(X): Recent results have also studied the case where the data X can be grouped using additional information (Wang et al., 2021a; Wang et al., 2021b). For instance, Wang et al. (2021a) consider the setting where the data can be grouped by associated "sensitive information", e.g., by age, gender, or race. The noise transition matrix then remains the same for all X belonging to the same group.…”
Section: Noise Clusterability (mentioning)
confidence: 99%
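To make the group-dependent noise model in the statement above concrete, here is a minimal Python sketch; the group names and transition-matrix values are illustrative assumptions, not taken from the cited papers. Each group carries its own label-noise transition matrix, shared by every example in that group:

```python
import numpy as np

# Minimal sketch of group-dependent label noise (illustrative values only).
# For each group g, T[g][i, j] = P(noisy label = j | clean label = i, group = g);
# the matrix is shared by all examples X within the same group.
T = {
    "group_a": np.array([[0.9, 0.1],
                         [0.2, 0.8]]),
    "group_b": np.array([[0.7, 0.3],
                         [0.3, 0.7]]),
}

def corrupt_labels(y_clean, groups, seed=0):
    """Flip each clean binary label according to its group's transition matrix."""
    rng = np.random.default_rng(seed)
    y_noisy = np.empty_like(y_clean)
    for i, (y, g) in enumerate(zip(y_clean, groups)):
        y_noisy[i] = rng.choice(2, p=T[g][y])  # row y of T[g] gives P(noisy | clean = y)
    return y_noisy

y = np.array([0, 1, 1, 0, 1])
g = ["group_a", "group_a", "group_b", "group_b", "group_a"]
print(corrupt_labels(y, g))
```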
“…The noise transition matrix T(X), defined as the transition probability between Ỹ and Y, plays a central role in this problem. Among many other benefits, knowledge of T(X) has proven useful for risk correction (Natarajan et al., 2013; Patrini et al., 2017a), label correction (Patrini et al., 2017a), and constraint correction (Wang et al., 2021a). Beyond these, it also finds application in ranking small-loss samples (Han et al., 2020) and detecting corrupted samples (Zhu et al., 2021a).…”
Section: Introduction (mentioning)
confidence: 99%
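As a rough illustration of the risk-correction use of T(X) mentioned above, the following sketch implements a backward-corrected cross-entropy in the spirit of Natarajan et al. (2013) and Patrini et al. (2017a); the matrix values and function name are assumptions for illustration, and T is taken as known rather than estimated:

```python
import numpy as np

# Sketch of backward loss correction: with row-stochastic
# T[i, j] = P(noisy = j | clean = i), the vector T^{-1} @ loss gives an
# unbiased estimate of the clean per-class loss under the noisy labels.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
T_inv = np.linalg.inv(T)

def backward_corrected_ce(probs, y_noisy):
    """Cross-entropy on noisy labels, corrected by T^{-1}.

    probs:   (n, 2) predicted class probabilities
    y_noisy: (n,)   observed noisy labels
    """
    per_class = -np.log(np.clip(probs, 1e-12, 1.0))  # (n, 2): loss if the clean label were 0 / 1
    corrected = per_class @ T_inv.T                   # apply T^{-1} to each loss vector
    return corrected[np.arange(len(y_noisy)), y_noisy].mean()

probs = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
y_noisy = np.array([0, 1, 1])
print(backward_corrected_ce(probs, y_noisy))
```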
“…This means that if the ML algorithm had been trained on the modified training data, it would not have exhibited the unexpected or undesirable behavior, or would have exhibited it to a lesser degree. Explanations generated by our framework, which complement existing approaches in XAI, are crucial for helping system developers and ML practitioners debug ML algorithms for data errors and bias in training data, such as measurement errors and misclassifications [35,42,94], data imbalance [27], missing data and selection bias [29,62,63], covariate shift [74,82], technical biases introduced during data preparation [85], and poisonous data points injected through adversarial attacks [36,43,65,83]. It is known in the algorithmic fairness literature that information about the source of bias is critically needed to build fair ML algorithms, because no current bias mitigation solution fits all situations [27,31,36,82,94].…”
Section: Introduction (mentioning)
confidence: 99%
“…More compact and coherent descriptions are needed. Furthermore, sources of bias and discrimination in training data are typically not randomly distributed across different sub-populations; rather, they manifest systematic errors in data collection, selection, feature engineering, and curation [29,35,42,62,63,70,94]. That is, more often than not, certain cohesive subsets of training data are responsible for bias.…”
Section: Introduction (mentioning)
confidence: 99%