2022
DOI: 10.48550/arxiv.2202.08176
Preprint

Bias and unfairness in machine learning models: a systematic literature review

Abstract: One of the difficulties of artificial intelligence is ensuring that model decisions are fair and free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and mitigate algorithmic unfairness and bias. This study aims to examine existing knowledge on bias and unfairness in Machine Learning models, identifying mitigation methods, fairness metrics, and supporting tools. A Systematic Literature Review found 40 eligible articles published between 2017 and 2022 in the Scopus, IEEE Xp…

Cited by 6 publications
(3 citation statements)
References 35 publications
“…Pagano et al categorize bias mitigation techniques into three main categories: preprocessing, in-processing, and post-processing [68]. Pre-processing approaches work towards rebalancing the data.…”
Section: Step-by-step Approach To Algorithm Bias
confidence: 99%
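The pre-processing idea quoted above (rebalancing the data before training) can be illustrated with a minimal reweighing sketch: each sample gets a weight that makes group membership and label statistically independent in the weighted data. This is an illustrative example, not the method of any cited paper; the function name and toy data are assumptions.

```python
from collections import Counter

def reweigh(groups, labels):
    """Pre-processing bias mitigation sketch: per-sample weights that
    rebalance group/label combinations.

    Each sample receives weight P(group) * P(label) / P(group, label),
    so over- and under-represented (group, label) pairs are down- and
    up-weighted, respectively.
    """
    n = len(labels)
    g_count = Counter(groups)                 # marginal counts per group
    y_count = Counter(labels)                 # marginal counts per label
    gy_count = Counter(zip(groups, labels))   # joint counts per (group, label)
    return [
        (g_count[g] / n) * (y_count[y] / n) / (gy_count[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" is mostly labeled 1, group "b" only 0.
weights = reweigh(["a", "a", "a", "b"], [1, 1, 0, 0])
```

The weights can then be passed to any learner that accepts sample weights, leaving the model and its predictions untouched — the defining trait of a pre-processing (as opposed to in- or post-processing) approach.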
“…Although current facial recognition systems achieve high average accuracy, numerous current improvements are directed towards addressing the disproportionate accuracies across different categories, including race [1][2][3][4]. Many attribute the majority of these issues to unbalanced datasets [5][6][7][8].…”
Section: Introduction
confidence: 99%
“…In other words, the model relies on heuristics specific to the training data but does not generalize well to unseen data. This can be a significant challenge in real-world machine learning applications [5][6][7]. In the realm of histopathology, distinct categories of biases arise from various sources 8,9, as summarized in Fig.…”
Section: Introduction
confidence: 99%