2020
DOI: 10.1007/978-3-030-58574-7_34
Null-Sampling for Interpretable and Fair Representations

Abstract: We propose to learn invariant representations, in the data domain, to achieve interpretability in algorithmic fairness. Invariance implies a selectivity for high-level, relevant correlations w.r.t. class label annotations, and a robustness to irrelevant correlations with protected characteristics such as race or gender. We introduce a non-trivial setup in which the training set exhibits a strong bias such that class label annotations are irrelevant and spurious correlations cannot be distinguished. To address …
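To make the abstract's idea concrete: the method maps an input into a latent space, removes the portion that encodes the protected characteristic, and maps back to the data domain. Below is a minimal toy sketch of that pipeline; the orthogonal matrix standing in for a learned invertible encoder, the dimensions, and the assumption that the first k latent dimensions carry the sensitive attribute are all illustrative, not the paper's actual architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 2  # data dimension; size of the (assumed) sensitive subspace

# A random orthogonal matrix stands in for a learned invertible encoder.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

def encode(x):
    return x @ Q            # z = f(x)

def decode(z):
    return z @ Q.T          # x = f^{-1}(z); exact inverse since Q is orthogonal

def null_sample(x):
    """Zero out the sensitive subspace, then map back to the data domain."""
    z = encode(x)
    z[..., :k] = 0.0        # "null-sample" the sensitive part of the latent
    return decode(z)        # an invariant version that is still data-shaped

x = rng.normal(size=(4, d))       # a small batch of fake data points
x_inv = null_sample(x)

# Sanity check: the invariant version carries nothing in the sensitive subspace.
print(np.allclose(encode(x_inv)[..., :k], 0.0))  # True
```

Because the result lives in the original data domain rather than an abstract latent space, it can be inspected directly, which is what makes the representation interpretable.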

Cited by 13 publications (8 citation statements)
References 8 publications
“…Interpreting the representations. We now show that the bijectivity of FNF enables interpretability analyses, an active area of research in fair representation learning [71,72,73]. To that end, we consider the Crime dataset where, for a community x: (i) race (non-white vs. white majority) is the sensitive attribute a, (ii) the percentage of whites (highly correlated with, but not entirely predictive of, the sensitive attribute) is in the feature set, and (iii) the label y strongly correlates with the sensitive attribute.…”
Section: Comparison With Adversarial Training (mentioning)
confidence: 99%
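The analysis the quoted statement points at can be sketched generically: with a bijective encoder, one can flip the latent coordinate tied to the sensitive attribute, decode, and read off the feature-wise change in data space. The linear map and the choice of latent dimension 0 below are hypothetical stand-ins, not FNF's actual flow model.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # stand-in invertible map

encode = lambda x: x @ Q
decode = lambda z: z @ Q.T

x = rng.normal(size=(d,))   # one "community" feature vector, as in Crime
z = encode(x)

# Counterfactual: flip the latent coordinate (assumed) tied to the
# sensitive attribute, then invert back to the data domain.
z_cf = z.copy()
z_cf[0] = -z_cf[0]
x_cf = decode(z_cf)

# Bijectivity makes the comparison exact: the difference shows which
# input features (e.g., percentage of whites) the representation
# associates with the sensitive attribute.
print(np.round(x_cf - x, 3))
```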
“…Fair representation learning. A wide range of methods has been proposed to learn fair representations of user data. Most of these works consider group fairness and employ techniques such as adversarial learning [17,38,49,53], disentanglement [12,51,66], duality [70], low-rank matrix factorization [60], and distribution alignment [3,52,85]. Individually fair representation learning has recently gained attention, with similarity metrics based on logical formulas [65], Wasserstein distance [22,46], fairness graphs [47], and weighted ℓ_p-norms [84].…”
Section: Related Work (mentioning)
confidence: 99%
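Of the group-fairness techniques listed, adversarial learning is a common baseline; a minimal sketch of the alternating min-max loop follows. The network sizes, optimizer settings, and synthetic batch are illustrative assumptions, not any cited method's exact recipe.

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))  # representation
task = nn.Linear(4, 2)   # predicts the class label y from the representation
adv = nn.Linear(4, 2)    # tries to recover the sensitive attribute a

opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 16)          # fake batch of features
y = torch.randint(0, 2, (32,))   # class labels
a = torch.randint(0, 2, (32,))   # sensitive attribute

for step in range(100):
    # (1) Adversary step: learn to predict a from the frozen representation.
    z = enc(x).detach()
    loss_adv = ce(adv(z), a)
    opt_adv.zero_grad()
    loss_adv.backward()
    opt_adv.step()

    # (2) Main step: predict y well while fooling the adversary
    # (the minus sign maximizes the adversary's loss).
    z = enc(x)
    loss_main = ce(task(z), y) - ce(adv(z), a)
    opt_main.zero_grad()
    loss_main.backward()
    opt_main.step()
```

At convergence the representation supports predicting y while giving the adversary little signal about a, which is the group-fairness goal these methods share.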
“…Dataset | Abbr. | SOTA | Content | Selection reason
Single-label:
Omniglot [38] | OM | [41] | Handwritten chars | Few-shot
MiniImagenet [70] | MI | [44] | ImageNet subset | Few-shot
Labeled Faces in the Wild [27] | LFW | [8] | Faces | Diversity - Faces
UCF101 [64] | UCF | [28] | Action videos | Diversity - Actions
Imagenet-R [23] | IR | [52] | ImageNet art | Diversity - Art
Imagenet-Sketch [71] | IS | [71] | Imagenet sketches | Diversity - Sketch
Indoor Scene Recognition [51] | ISR | [56] | Indoor location | Diversity - Indoor
CIFAR10 [37] | C10 | [14] | 10 classes | Diversity
Imagenet-A [24] | IA | [52] | Difficult images | Robustness
Colorectal Histology [30] | CH | [16] | Medical Images | AI for good
Multi-label:
CelebA Attributes [43] | CAA | [31] | 40 attributes | Label overlap
UTK Faces [75] | UTK | [29] | Gender, age, race | Label overlap
Yale Faces [15] | YF | [32] | 11 face labels | Label overlap
Common Objects in Context [40] | COCO | [59] | 90 objects | Object Detection
iMaterialist (Fashion) [18] | IM | [18] | Fashion/apparel | Fine-grained
We randomly sample classes and data points from each dataset 100 times and average the results.…”
Section: SOTA Content Selection Reason Single (mentioning)
confidence: 99%
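The evaluation protocol in the quote's last sentence (sample classes and data points, repeat 100 times, average) can be sketched as follows. The evaluate metric, dataset sizes, and per-draw sample counts are hypothetical placeholders; only the 100 repetitions come from the quoted text.

```python
import numpy as np

rng = np.random.default_rng(42)

def evaluate(features, labels):
    # Placeholder metric; the cited work's actual metric is not given here.
    return float((labels == labels[0]).mean())

labels_all = rng.integers(0, 20, size=1000)    # fake dataset with 20 classes
features_all = rng.normal(size=(1000, 32))

scores = []
for _ in range(100):                           # 100 repetitions, as quoted
    classes = rng.choice(20, size=5, replace=False)           # sample classes
    idx_pool = np.flatnonzero(np.isin(labels_all, classes))
    idx = rng.choice(idx_pool, size=50, replace=False)        # sample data points
    scores.append(evaluate(features_all[idx], labels_all[idx]))

print(f"mean={np.mean(scores):.3f}  std={np.std(scores):.3f}")  # averaged result
```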