2021
DOI: 10.1016/j.jbi.2020.103621

An empirical characterization of fair machine learning for clinical risk prediction

Cited by 94 publications (80 citation statements)
References 37 publications
“…Model performance was evaluated using the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPRC), and the absolute calibration error (ACE) [21]. ACE is a calibration measure similar to the integrated calibration index [22] in that it assesses overall model calibration by averaging the absolute deviations from an approximated perfect calibration curve. The difference is that ACE uses logistic regression for the approximation instead of a locally weighted regression such as LOESS.…”
Section: Methods (mentioning)
confidence: 99%
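The quoted passage defines ACE operationally, so a short sketch may help. The Python function below is a minimal reconstruction under that definition only; the name absolute_calibration_error, the choice to recalibrate on the logit of the predicted risk, and the near-unpenalized LogisticRegression(C=1e6) fit are assumptions, not the cited authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def absolute_calibration_error(y_true, y_prob, eps=1e-12):
    """Mean absolute gap between predicted risks and a logistic-regression
    approximation of the calibration curve (the ACE described above)."""
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    logit = np.log(y_prob / (1.0 - y_prob)).reshape(-1, 1)
    # Regress observed outcomes on the logit of the predicted risk; the
    # fitted probabilities approximate the "perfect" calibration curve.
    recal = LogisticRegression(C=1e6).fit(logit, np.asarray(y_true))
    calibrated = recal.predict_proba(logit)[:, 1]
    # ACE: average absolute deviation of predictions from that curve.
    return float(np.mean(np.abs(calibrated - y_prob)))
```

Swapping the logistic fit for a locally weighted smoother (a LOESS fit) would yield the integrated-calibration-index variant that the passage contrasts ACE with.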
“…While existing techniques in SRAs have indeed made significant progress towards responsible AI systems, their usefulness can be limited in settings where the resulting decisions are actually poorer for every individual. For fairness in prediction, for example, many findings (e.g., Pfohl, Foryciarz, & Shah, 2020) have raised concerns about the fairness-performance trade-off: imposing fairness comes at a cost to model performance. Predictions become less reliable and, moreover, different notions of fairness can put approaches to fairness in conflict with one another.…”
Section: Open Problems and Challenges (mentioning)
confidence: 99%
“…There is also a well-known bias that can emerge against specific groups, whether by race or even socioeconomic status, and that can be propagated at scale if ML algorithms are not trained and “de-biased” properly [19]. However, it is becoming clear that researchers developing predictive models for clinical use need to transcend traditional conversations about algorithmic bias and think harder about the broader, structural forces at play in the observed phenomena [20].…”
Section: Accurate, Reliable and Effective (mentioning)
confidence: 99%