2024
DOI: 10.1038/s41591-024-02885-z
Demographic bias in misdiagnosis by computational pathology models

Anurag Vaidya, Richard J. Chen, Drew F. K. Williamson et al.
Cited by 10 publications (2 citation statements). References 136 publications.
“…Another important priority is the use of demographically inclusive datasets across key variables such as age, sex, gender, and ethnicity during model training, as imbalanced medical imaging training datasets have been shown to lead to models that perform worse in underrepresented groups [64, 65]. For instance, a recent study using large public datasets from The Cancer Genome Atlas and the EBRAINS brain tumor atlas, composed primarily of tumors from white patients, showed a 16.0% performance gap in the prediction of IDH1 status in gliomas from black vs. white patients [65]. In addition to highlighting the need for bias mitigation strategies, including external validation with existing, prospective, and demographically stratified datasets, the authors demonstrate the utility of novel strategies such as self-supervised vision foundation models for improving model generalizability.…”
Section: Challenges and Risks