2018
DOI: 10.48550/arxiv.1810.08810
Preprint
The Frontiers of Fairness in Machine Learning

Alexandra Chouldechova, Aaron Roth

Abstract: The last few years have seen an explosion of academic and popular interest in algorithmic fairness. Despite this interest and the volume and velocity of work that has been produced recently, the fundamental science of fairness in machine learning is still in a nascent state. In March 2018, we convened a group of experts as part of a CCC visioning workshop to assess the state of the field, and distill the most promising research directions going forward. This report summarizes the findings of that workshop. Alo…

Cited by 103 publications (121 citation statements) | References 13 publications
“…Biases can enter an AI system at any point in its cycle. Empirical findings have shown that data-driven methods can unintentionally encode existing human biases and introduce new ones [Chouldechova and Roth, 2018]. According to [Ferrer et al., 2021], three causes of biases are:…”
Section: Causes Of Bias In AI Systems (mentioning)
confidence: 99%
“…There is a growing body of work on fairness in machine learning. Much of the research is on fair supervised methods; see Chouldechova and Roth (2018); Barocas et al. (2019); Donini et al…”
Section: Related Work (mentioning)
confidence: 99%
“…Bias and fairness issues are crucial as machine learning systems are increasingly used in sensitive applications (Chouldechova and Roth, 2018). Bias is caused by pre-existing societal norms (Friedman and Nissenbaum, 1996), the data source, data labeling, training algorithms, and post-processing models.…”
Section: Related Work (mentioning)
confidence: 99%
“…For example, if we train a model on data that contain labels from two populations, a majority and a minority population, minimizing overall error will fit only the majority population, ignoring the minority (Chouldechova and Roth, 2018). Data labeling bias exists when the distribution of the dependent variable in the data source diverges from the ideal distribution (Shah et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
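To make the majority-fit failure mode in the excerpt above concrete, here is a minimal, hypothetical sketch, not drawn from any of the cited papers: the two groups follow opposite labeling rules, and a single classifier trained by ordinary empirical risk minimization on the pooled data learns the majority's rule, so overall accuracy looks acceptable while the minority group is almost entirely misclassified. The group sizes, the synthetic data model, and the use of scikit-learn's LogisticRegression as the learner are all illustrative assumptions.

# Illustrative sketch (assumptions: synthetic 1-D data, a 90/10 group split,
# and scikit-learn's LogisticRegression as the ERM learner).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_major, n_minor = 9000, 1000                 # 90% majority, 10% minority
x_major = rng.normal(size=(n_major, 1))
x_minor = rng.normal(size=(n_minor, 1))
y_major = (x_major[:, 0] > 0).astype(int)     # majority rule: positive iff x > 0
y_minor = (x_minor[:, 0] < 0).astype(int)     # minority rule is the opposite

X = np.vstack([x_major, x_minor])
y = np.concatenate([y_major, y_minor])

clf = LogisticRegression().fit(X, y)          # minimize overall error on pooled data

print("overall accuracy :", clf.score(X, y))              # ~0.90
print("majority accuracy:", clf.score(x_major, y_major))  # ~1.00
print("minority accuracy:", clf.score(x_minor, y_minor))  # ~0.00

The learned decision boundary sits where the majority's rule predicts well, so the roughly 90% overall accuracy hides a near-total failure on the minority group; this is exactly the dynamic the excerpt attributes to minimizing overall error on mixed-population data.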