2022
DOI: 10.48550/arxiv.2206.14397
Preprint

Fair Machine Learning in Healthcare: A Review

Abstract: Benefiting from the digitization of healthcare data and the development of computing power, machine learning methods are increasingly used in the healthcare domain. Fairness problems have been identified in machine learning for healthcare, resulting in an unfair allocation of limited healthcare resources or excessive health risks for certain groups. Therefore, addressing the fairness problems has recently attracted increasing attention from the healthcare community. However, the intersection of machine learning…

Cited by 8 publications (8 citation statements)
References 61 publications
“…Yet, evidence for fairness in the PI setting is lacking. Closer to PI, fairness research in the healthcare setting is still in its infancy [40]. The digitization of medical data has enabled the scientific community to collect large amounts of heterogeneous, multi-modal data and develop machine learning algorithms for a variety of medical tasks.…”
Section: Related Work
confidence: 99%
“…In contrast, an algorithm is unfair if its decisions are skewed toward a particular group of the population without being explained by clinical needs [18,33]. On the basis of the definition of fairness [34], we used 2 metrics to assess fairness: equal opportunity difference (EOD) and disparate impact (DI). Textbox 1 shows the terminology used to define the fairness metrics in this study.…”
Section: Fairness Metrics
confidence: 99%
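The two metrics named in the statement above are standard group-fairness measures: equal opportunity difference compares true-positive rates across groups, and disparate impact is the ratio of positive-prediction rates. A minimal illustrative sketch follows; the group coding (0 = unprivileged, 1 = privileged), variable names, and toy data are assumptions for the example, not taken from the cited study.

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(unprivileged) - TPR(privileged); 0 indicates equal opportunity."""
    tprs = []
    for g in (0, 1):  # 0 = unprivileged, 1 = privileged (assumed coding)
        mask = (group == g) & (y_true == 1)  # positives in this group
        tprs.append(y_pred[mask].mean())     # fraction correctly flagged
    return tprs[0] - tprs[1]

def disparate_impact(y_pred, group):
    """P(pred=1 | unprivileged) / P(pred=1 | privileged); 1 indicates parity."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv

# Toy data: 8 patients, two groups of four (illustrative only)
y_true = np.array([1, 1, 0, 1, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(equal_opportunity_difference(y_true, y_pred, group))
print(disparate_impact(y_pred, group))
```

A common rule of thumb treats a DI below 0.8 as evidence of adverse impact, though thresholds vary by application.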
“…The suggestions provided, summarized from the literature to date, are not exhaustive. Because fair-aware AI is a relatively new field, standards are still evolving; however, the evidence validating the use of these methods is growing rapidly and has already accumulated an impressive empirical base (e.g., Feng, 2022; Kamishima et al., 2012; Mehrabi et al., 2022; Oneto et al., 2019; Pfohl et al., 2021; Ustun, 2019; Zemel et al., 2013; Zhao et al., 2018). Although a complete review of all possible mitigation methods is outside the scope of this article, we provide a general overview here to guide development and implementation within psychology.…”
Section: Assessing and Mitigating Bias in AI
confidence: 99%
“…If bias is detected, apply model in-processing and decision postprocessing methods (e.g., Feng, 2022; Kamishima et al., 2012; Mehrabi et al., 2022; Oneto et al., 2019; Pfohl et al., 2021; Ustun, 2019; Zemel et al., 2013; Zhao et al., 2018). Repeat Model Evaluation Steps 1-3 until bias is removed from the model.…”
Section: Bias Mitigation
confidence: 99%
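Decision post-processing, one of the mitigation families named in the statement above, adjusts a trained model's outputs rather than the model itself. A minimal sketch of one simple variant follows: picking a per-group score threshold so that each group ends up with the same positive-prediction rate. The function name, the quantile-based threshold rule, and the toy scores are assumptions for illustration, not any cited paper's method.

```python
import numpy as np

def postprocess_thresholds(scores, group, target_rate):
    """Return binary decisions where each group's positive rate is set
    (approximately) to target_rate via a group-specific score threshold."""
    y_adj = np.zeros_like(scores, dtype=int)
    for g in np.unique(group):
        s = scores[group == g]
        # Threshold at the (1 - target_rate) quantile of this group's scores,
        # so roughly target_rate of the group scores at or above it.
        thr = np.quantile(s, 1.0 - target_rate)
        y_adj[group == g] = (s >= thr).astype(int)
    return y_adj

# Toy risk scores for 8 patients in two groups (illustrative only)
scores = np.array([0.9, 0.4, 0.8, 0.2, 0.6, 0.3, 0.7, 0.1])
group  = np.array([0,   0,   0,   0,   1,   1,   1,   1])

y_adj = postprocess_thresholds(scores, group, target_rate=0.5)
print(y_adj)
```

Equalizing selection rates this way trades accuracy within each group for parity between them; in-processing methods instead bake a fairness penalty into training, which typically preserves more accuracy at the cost of retraining.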