2019
DOI: 10.48550/arxiv.1905.13662
Preprint

On the Fairness of Disentangled Representations

Abstract: Recently there has been a significant interest in learning disentangled representations, as they promise increased interpretability, generalization to unseen scenarios and faster learning on downstream tasks. In this paper, we investigate the usefulness of different notions of disentanglement for improving the fairness of downstream prediction tasks based on representations. We consider the setting where the goal is to predict a target variable based on the learned representation of high-dimensional observations…
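As a rough illustration of the setup the abstract describes, the sketch below trains a downstream classifier on learned representations and scores a simple demographic-parity-style gap across values of a sensitive variable. The encoder `encode`, the data arguments, and the particular unfairness score are assumptions for illustration only, not the paper's own pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

def demographic_parity_gap(y_pred, sensitive):
    # Largest gap in positive-prediction rate between sensitive groups
    # (assumes binary 0/1 predictions).
    rates = [y_pred[sensitive == s].mean() for s in np.unique(sensitive)]
    return max(rates) - min(rates)

def downstream_unfairness(encode, X_train, y_train, X_test, s_test):
    # Map high-dimensional observations to (ideally disentangled) codes.
    z_train, z_test = encode(X_train), encode(X_test)
    # Predict the target from the representation alone; the sensitive
    # variable is never used for training.
    clf = LogisticRegression(max_iter=1000).fit(z_train, y_train)
    y_pred = clf.predict(z_test)
    # Score unfairness on held-out data where the sensitive variable
    # is observed for evaluation only.
    return demographic_parity_gap(y_pred, s_test)

Note that the sensitive variable appears only in the evaluation step, matching the paper's setting in which it is unobserved at training time.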

Cited by 18 publications (20 citation statements)
References 33 publications
“…Similar to separating content and style in natural images, disentanglement can potentially be used as an unsupervised approach for isolating protected attribute information such as ethnicity, which can subsequently be truncated from the latent code for downstream fairness tasks [?, 231]. Recent exhaustive studies evaluating unsupervised VAE-based disentangled models have demonstrated that disentanglement scores correlate with fairness metrics, benchmarked on numerous fair classification tasks without protected attribute information [232]. In application to face identification, disentanglement-like methods have been proposed for clustering human faces without latent code information that contains dominant features such as skin and hair color [233].…”
Section: Fair Representation Learning Via Disentanglement
confidence: 99%
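To make the truncation idea in this excerpt concrete, here is a hedged sketch that zeroes out the latent dimensions most correlated with a protected attribute before any downstream use. This is a generic illustration under simple assumptions (numeric attribute encoding, linear correlation as the relevance score), not the specific method of the cited works.

import numpy as np

def truncate_protected_dims(z, sensitive, k=2):
    # z         : (n_samples, n_dims) latent codes
    # sensitive : (n_samples,) numerically encoded protected attribute
    # Compute |Pearson correlation| between each latent dimension
    # and the protected attribute.
    z_c = z - z.mean(axis=0)
    s_c = sensitive - sensitive.mean()
    corr = np.abs(z_c.T @ s_c) / (
        np.linalg.norm(z_c, axis=0) * np.linalg.norm(s_c) + 1e-12
    )
    # Zero out the k dimensions most predictive of the attribute,
    # leaving the rest of the code intact for downstream tasks.
    z_fair = z.copy()
    z_fair[:, np.argsort(corr)[-k:]] = 0.0
    return z_fair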
“…Several debiasing methods have been proposed for existing models, the simplest being to remove subgroup indicators [10]. Others have used reinforcement learning to control the model's disparity level [9, 41], adversarial learning to generate debiased input data [40, 47, 51], or aim to construct a fair latent representation [1, 24, 26, 50]. In this work we do not consider debiasing.…”
Section: Background and Related Work
confidence: 99%
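As a minimal example of the simplest baseline named in this excerpt, dropping explicit subgroup indicator columns might look like the following. The column names are hypothetical, and this does not remove features that are merely correlated with the subgroup.

import pandas as pd

def drop_subgroup_indicators(df: pd.DataFrame, indicators=("race", "gender")):
    # Remove explicit subgroup columns if present; correlated proxy
    # features remain untouched.
    return df.drop(columns=[c for c in indicators if c in df.columns])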
“…This is demonstrably problematic, particularly because of the arbitrary nature of the functions being learned. It can result in bias against minority groups and in algorithmic decisions that are heavily influenced by culturally sensitive or legally protected characteristics such as race, age, gender, or sex (Hardt et al., 2016; Locatello et al., 2019; Cao and Daume III, 2019; Liu et al., 2019; Howard and Borenstein, 2018; Rose, 2010; Buolamwini and Gebru, 2018). In response to concern surrounding the bias and opacity of ML-driven decisions… Unfortunately, and as we will show, machine learning models and model explainability techniques cannot be used reliably to infer correlations, and the purview of explainability should be restricted to be completely local to the model.…”
Section: Introduction
confidence: 99%