2022
DOI: 10.1016/j.ebiom.2022.104250
Algorithmic fairness in computational medicine


Cited by 74 publications (61 citation statements)
References 76 publications
“…But more importantly, these results may have profound implications in the context discussed earlier, that there is a real risk that ML models may amplify health disparities. 9 , 10 , 11 Since it seems straightforward (given the very high accuracy) to extract features related to racial information from medical scans, any spurious correlations between race and clinical outcome present in the data could be picked up by a model that is trained for clinical diagnosis. Assuming that features predictive of race are easier to extract than features associated with pathology, there are concerns that the model may learn ‘shortcuts’ that could manifest an undesirable association in the model between the patient's race and the prediction of disease.…”
Section: Introduction
confidence: 99%
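The shortcut-learning concern quoted above can be illustrated with a minimal synthetic sketch. This is not from the cited paper; the data, feature names, and prevalences are all invented. A logistic regression is trained on a cohort where an easy, noise-free "group" feature is spuriously correlated with disease, while the true "pathology" feature is noisy. The model leans on the shortcut and degrades once the spurious correlation is absent at deployment:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, spurious):
    """Synthetic cohort: 'group' is a clean, easy-to-extract proxy feature;
    'pathology' is a noisy feature truly linked to disease. If `spurious`,
    disease prevalence is confounded with group membership."""
    group = rng.integers(0, 2, n)
    if spurious:
        disease = rng.random(n) < np.where(group == 1, 0.9, 0.1)
    else:
        disease = rng.random(n) < 0.5          # independent of group
    pathology = disease + rng.normal(0, 2.0, n)  # weak, noisy true signal
    X = np.column_stack([group, pathology]).astype(float)
    return X, disease.astype(float)

def fit_logreg(X, y, steps=2000, lr=0.1):
    """Plain gradient-descent logistic regression (no external deps)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean(((X @ w + b) > 0) == y)

Xtr, ytr = make_data(2000, spurious=True)
w, b = fit_logreg(Xtr, ytr)
print("weights [group, pathology]:", w.round(2))  # shortcut weight dominates

# The shortcut looks good in-distribution but collapses when the
# group-disease correlation is absent at deployment time.
Xte, yte = make_data(2000, spurious=False)
print("in-distribution accuracy: %.2f" % accuracy(Xtr, ytr, w, b))
print("shifted-test accuracy:    %.2f" % accuracy(Xte, yte, w, b))
```

Because the proxy feature is noiseless and strongly correlated with the label during training, gradient descent assigns it far more weight than the genuinely predictive but noisy pathology feature, exactly the "easier to extract" asymmetry the quoted passage warns about.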
“…On a different note, Abdul et al [1] featured emerging trends for explainable, accountable, and intelligible systems within the CHI community, also discussing notions of fairness. Closer to our work, Mhasawade et al [94] discussed ML fairness in the domain of public and population health, and Xu et al [144] explored algorithmic fairness in computational medicine, which only covers a subset of the broad, interdisciplinary UbiComp research domains.…”
Section: Related Work
confidence: 96%
“…In the context of current issues in healthcare, Chen et al 16 summarized the fairness in machine learning and its intersectional field, outlining how algorithmic biases arise in existing clinical workflows and the healthcare disparities that result from these issues. Although research 1,16,17,18 shows that AI algorithms can be biased against specific populations or groups in various situations, there is a gap in understanding fairness in audio sentiment analysis.…”
Section: Introduction
confidence: 99%