2020
DOI: 10.48550/arXiv.2010.04053
Preprint

Fairness in Machine Learning: A Survey

Simon Caton,
Christian Haas

Abstract: As Machine Learning technologies become increasingly used in contexts that affect citizens, companies as well as researchers need to be confident that their application of these methods will not have unexpected social implications, such as bias towards gender, ethnicity, and/or people with disabilities. There is significant literature on approaches to mitigate bias and promote fairness, yet the area is complex and hard to penetrate for newcomers to the domain. This article seeks to provide an overview of the d…

Cited by 93 publications (146 citation statements)
References 153 publications (372 reference statements)
“…(2018) and references therein. Our paper adds to the literature on fair methods for unsupervised learning tasks (Chierichetti et al., 2017; Celis et al., 2017; Samadi et al., 2018; Tantipongpipat et al., 2019; Oneto and Chiappa, 2020; Caton and Haas, 2020; Kleindessner et al., 2019). We discuss the work on fairness most closely related to our paper.…”
Section: Related Work
confidence: 82%
“…In this work, we presented a machine learning method to learn predictive checklists from data by solving an integer program. Our method illustrates a promising approach to fit models to obey constraints related to qualities like safety [1] and fairness [3,14]. Using our approach, practitioners can potentially co-design checklists alongside clinicians, by encoding requirements into the model and reporting their effects on predictive performance [17,36].…”
Section: Discussion
confidence: 99%
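The excerpt above mentions learning predictive checklists from data by solving an integer program. As a rough, hypothetical sketch of the general idea only (the cited method itself uses integer programming, which is not reproduced here), a tiny checklist of M binary items and a threshold T, predicting positive when at least T items fire, can be fit by exhaustive search over a handful of candidate features:

```python
from itertools import combinations

# Hypothetical sketch: fit an M-item checklist by brute force instead of
# an integer program (feasible only for very small feature sets).
def fit_checklist(X, y, n_items=2):
    """X: list of binary feature rows; y: binary labels.
    Returns (accuracy, feature_indices, threshold) maximizing training accuracy."""
    n_features = len(X[0])
    best = (0.0, None, None)
    for items in combinations(range(n_features), n_items):
        for threshold in range(1, n_items + 1):
            # Checklist rule: predict 1 when at least `threshold` items fire.
            preds = [int(sum(row[j] for j in items) >= threshold) for row in X]
            acc = sum(p == t for p, t in zip(preds, y)) / len(y)
            if acc > best[0]:
                best = (acc, items, threshold)
    return best

# Toy data: 4 binary "symptoms"; the label happens to equal
# "at least one of features 0 and 2 is present".
X = [[1, 0, 1, 0], [1, 1, 1, 0], [0, 0, 1, 1],
     [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
y = [1, 1, 1, 0, 0, 0]
acc, items, thr = fit_checklist(X, y, n_items=2)
```

The search recovers the checklist (features 0 and 2, threshold 1) with perfect training accuracy on this toy set; the appeal of the integer-programming formulation in the cited work is that it scales this kind of combinatorial selection and can encode additional requirements as constraints.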
“…Aside from fairness definitions, debiasing techniques can be categorized into three main approaches: pre-processing [13], in-processing [39,53], and post-processing [38]. Deep representation learning is a typical in-processing method that incorporates fairness constraints into the model optimization objectives to obtain an ideal model parameterization that maximizes performance and fairness [3]. In particular, adversarial representation learning has become a widely-used method in recent years and has demonstrated effectiveness on multiple tasks, including anonymization [11], clustering [32], classification [39], transfer learning [39], and domain adaptation [30,49].…”
Section: Related Work
confidence: 99%
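The excerpt above describes in-processing debiasing: adding fairness constraints to the model's optimization objective. As a minimal hypothetical sketch of that category (not the adversarial method the excerpt cites), the following fits a logistic regression whose loss adds a demographic-parity penalty, the squared gap between the mean predicted scores of two groups; all data and parameter choices here are illustrative:

```python
import numpy as np

# Hypothetical in-processing sketch: logistic regression trained by gradient
# descent on log-loss plus lam * (demographic-parity gap)^2.
def fit_fair_logreg(X, y, group, lam=5.0, lr=0.1, epochs=500):
    w = np.zeros(X.shape[1])
    a, b = group == 0, group == 1
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        grad_ll = X.T @ (p - y) / len(y)       # log-loss gradient
        gap = p[a].mean() - p[b].mean()        # demographic-parity gap
        dp = p * (1 - p)                       # sigmoid derivative
        # Gradient of the gap w.r.t. w, by the chain rule.
        grad_gap = (X[a] * dp[a][:, None]).mean(0) - (X[b] * dp[b][:, None]).mean(0)
        w -= lr * (grad_ll + lam * 2 * gap * grad_gap)
    return w

# Synthetic data where one feature is correlated with group membership,
# so an unconstrained model produces a large score gap between groups.
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
X = np.column_stack([group + rng.normal(0, 0.5, n),
                     rng.normal(0, 1, n),
                     np.ones(n)])              # bias column
y = (X[:, 0] + 0.3 * X[:, 1] > 0.5).astype(float)

def gap(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return abs(p[group == 0].mean() - p[group == 1].mean())

w_plain = fit_fair_logreg(X, y, group, lam=0.0)   # no fairness penalty
w_fair = fit_fair_logreg(X, y, group, lam=5.0)    # penalized objective
```

With the penalty active, the between-group score gap shrinks relative to the unconstrained fit, at some cost in accuracy; this accuracy/fairness trade-off, controlled here by `lam`, is exactly what the constrained objectives surveyed in the cited works navigate.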