2022
DOI: 10.48550/arxiv.2202.03078
Preprint

Fair Interpretable Representation Learning with Correction Vectors

Abstract: Neural network architectures have been extensively employed in the fair representation learning setting, where the objective is to learn a new representation for a given vector which is independent of sensitive information. Various representation debiasing techniques have been proposed in the literature. However, as neural networks are inherently opaque, these methods are hard to comprehend, which limits their usefulness. We propose a new framework for fair representation learning that is centered around the l…

Cited by 2 publications (4 citation statements)
References 25 publications
“…1) Intrinsic: Here, we restrict to include those intrinsic methods that explicitly considered fairness notions in their study [49]- [62].…”
Section: A Result-oriented Fairness
confidence: 99%
“…Cerrato et al [49] introduced the concept of correction vectors to enhance interpretability in fair representation learning. These correction vectors can be computed explicitly, by modifying an existing neural network through the use of a Gradient Reversal Layer (GRL) to achieve interpretability.…”
Section: A Result-oriented Fairness
confidence: 99%
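The statement above describes computing correction vectors by adding a Gradient Reversal Layer (GRL) to an existing network. As a rough illustration only (not the authors' implementation; class and parameter names here are hypothetical), a GRL is the identity in the forward pass and flips the sign of gradients in the backward pass, so an encoder trained against a sensitive-attribute adversary is pushed to discard sensitive information:

```python
import numpy as np

class GradientReversal:
    """Gradient Reversal Layer (GRL) sketch: identity in the forward pass,
    sign-flipped (and scaled by `lam`) gradients in the backward pass.
    Placed between an encoder and an adversarial sensitive-attribute
    predictor, it makes the encoder work *against* the adversary."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # reversal strength (hypothetical name)

    def forward(self, x: np.ndarray) -> np.ndarray:
        return x  # features pass through unchanged

    def backward(self, grad_out: np.ndarray) -> np.ndarray:
        return -self.lam * grad_out  # reverse the gradient sent to the encoder

grl = GradientReversal(lam=0.5)
x = np.array([1.0, 2.0])
y = grl.forward(x)                 # identity: [1.0, 2.0]
g = grl.backward(np.ones_like(x))  # reversed: [-0.5, -0.5]
```

In the correction-vector framing, the interpretable quantity is the difference between the debiased representation and the original input, which lives in the original feature space and can be inspected attribute by attribute.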
“…However, one problem with learning latent representations is their explainability. The projection into a latent space makes it difficult to investigate why the decision was made [7].…”
Section: Introduction
confidence: 99%