2021
DOI: 10.1145/3457607
A Survey on Bias and Fairness in Machine Learning

Abstract: With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently some work has been developed in traditional machine learning and…


Cited by 2,409 publications (1,522 citation statements). References 87 publications.
“…We also have not set out to provide an exhaustive review of the computational techniques in the AI/ML research to address ethical issues like fairness and explainability; for this, see for example [102,103]…”
Section: Document Analysis (mentioning)
confidence: 99%
“…Trained on historical data, ML algorithms may infer (multiple) proxies for legally protected or otherwise sensitive attributes (e.g., 'race', gender, or socio-economic status), consequently introducing disparities into algorithmic predictions (Baker & Hawn [2021]; Mehrabi et al [2021]). Due to unrepresentative sampling and other technical design flaws, bias against specific groups in outcomes may occur without explicit use of protected or sensitive attributes as model features.…”
Section: Algorithmic Fairness For Accountability? (mentioning)
confidence: 99%
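The proxy effect described in the statement above can be made concrete with a small numeric sketch. In the following Python snippet (illustrative only — the feature name `zip_region` and the 90% correlation are hypothetical assumptions, not taken from the survey), a model never sees the protected attribute, yet keying on a correlated proxy still produces a large demographic parity gap:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: the protected attribute `group` is never a model feature,
# but `zip_region` is a strong proxy for it (agrees with `group` ~90% of the time).
group = rng.integers(0, 2, n)
zip_region = np.where(rng.random(n) < 0.9, group, 1 - group)

# A "model" that decides purely on the proxy feature.
pred = zip_region  # positive decision iff zip_region == 1

# Demographic parity difference: P(pred=1 | group=1) - P(pred=1 | group=0).
rate_g1 = pred[group == 1].mean()
rate_g0 = pred[group == 0].mean()
dpd = rate_g1 - rate_g0
print(f"selection rates: group1={rate_g1:.2f}, group0={rate_g0:.2f}, DPD={dpd:.2f}")
```

Because the proxy tracks group membership so closely, the selection rates diverge sharply even though the protected attribute was excluded from the feature set — exactly the failure mode the quoted passage warns about.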
“…Secondly, by indicating deviations from the ideal (i.e., unwanted bias), they inform decision-makers about the need for interventions. Existing methods can mitigate identified biases by transforming the composition of the data used to train the algorithm, by adjusting the training process or the algorithm, or by balancing the output distribution (see Mehrabi et al [2021]). Open-source toolkits (e.g., Aequitas 2 and AI Fairness 360 3 ) enable developers of ML systems to evaluate their pre-trained models against a variety of fairness definitions, aiding in efforts to mitigate bias.…”
Section: Algorithmic Fairness For Accountability? (mentioning)
confidence: 99%
“…This realisation is critical to consider in applied settings. As with many big data problems, such as training facial recognition technology, developing predictive models based on language data can be problematic when the training set is not representative (e.g., Amorim et al., 2018; Mehrabi et al., 2021). For instance, when NLP is used in the context of automated essay scoring it can lead to bias if not trained on a dataset that is representative of the entire community being assessed.…”
Section: A Cautionary Note (mentioning)
confidence: 99%