2020
DOI: 10.48550/arxiv.2004.01355
Preprint

FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret

Cited by 2 publications (4 citation statements)
References 0 publications
“…Fairness is becoming an important issue to consider in the design of learning algorithms. A common strategy to make an algorithm fair is to remove the influence of one or more protected attributes when training the models; see [28]. Most methods assume that the labels of protected attributes are known during training, but this may not always be possible.…”
Section: Background and Motivation
confidence: 99%
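The constraint-based strategy this statement attributes to [28] (FairALM) can be pictured with a short sketch. The following Python/PyTorch fragment is illustrative rather than the authors' implementation: it penalizes a demographic-parity gap between two protected groups with an augmented Lagrangian term and updates a dual variable by ascent. The helper name `fairness_gap` and the hyperparameters `rho`, `lr`, and `epochs` are assumptions for illustration.

```python
import torch

def fairness_gap(scores, group):
    # Demographic-parity surrogate: gap in mean predicted scores between the
    # two protected groups (assumes each batch contains members of both).
    return scores[group == 1].mean() - scores[group == 0].mean()

def train_fair_alm(model, loader, epochs=5, rho=1.0, lr=1e-3):
    # Augmented-Lagrangian-style loop: primal SGD step on the penalized loss,
    # then dual ascent on the multiplier `lam`.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    lam = 0.0  # Lagrange multiplier for the fairness constraint
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y, g in loader:  # g: binary protected-attribute labels
            logits = model(x).squeeze(-1)
            gap = fairness_gap(torch.sigmoid(logits), g)
            # Task loss + lam * constraint + (rho / 2) * constraint^2
            loss = loss_fn(logits, y.float()) + lam * gap + 0.5 * rho * gap ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
            lam += rho * float(gap.detach())  # dual ascent step
    return model
```

Each primal step reduces the penalized loss, while the multiplier accumulates any residual constraint violation, progressively removing the protected attribute's influence on the model's outputs.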
“…There are up to 40 labels, each of which is binary-valued. Here, we follow [28] in focusing on the attractiveness attribute (which we want to train a classifier to predict), while gender is treated as "protected" since it may lead to an unfair classifier according to [28].…”
Section: Background and Motivation
confidence: 99%
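As a concrete illustration of this setup, the snippet below is a minimal sketch assuming the torchvision `CelebA` loader and its 40 binary attribute labels; choosing 'Attractive' as the target and 'Male' as the protected attribute follows the description above, while the root path, transform, and variable names are placeholders.

```python
from torchvision import datasets, transforms

# CelebA provides 40 binary-valued attribute labels per image.
ds = datasets.CelebA(root="./data", split="train", target_type="attr",
                     transform=transforms.ToTensor(), download=True)

target_idx = ds.attr_names.index("Attractive")  # label the classifier predicts
protected_idx = ds.attr_names.index("Male")     # treated as "protected"

img, attrs = ds[0]                 # attrs: tensor of the 40 binary labels
y = attrs[target_idx].item()       # target: attractive (1) or not (0)
g = attrs[protected_idx].item()    # protected attribute: gender label
```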
“…These tasks involve understanding invariance properties of data representations and/or parameters of the model we wish to learn. While mechanisms to control for extraneous variables are not strictly necessary in typical supervised learning tasks, where one focuses on predictive accuracy, many results over the last few years have indicated that they can be quite useful (Lokhande et al. 2020). For instance, controlling the influence of a protected attribute such as race or gender on a response variable such as credit worthiness enables the design…”
confidence: 99%