2021
DOI: 10.3389/fphys.2021.778720
Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19

Abstract: Artificial intelligence (AI) technologies have been applied in various medical domains to predict patient outcomes with high accuracy. As AI becomes more widely adopted, the problem of model bias is increasingly apparent. In this study, we investigate the model bias that can occur when training a model using datasets for only one particular gender and aim to present new insights into the bias issue. For the investigation, we considered an AI model that predicts severity at an early stage based on the medical r…

Cited by 12 publications (5 citation statements)
References 20 publications
“…Despite efforts to address gender biases, one study spanning two decades in academic medical centers showed that little progress has been made in transforming predominantly male-dominated fields, and women physicians continue to face significant challenges in these domains [21, 22]. One study explored the potential model bias that emerges when training AI models exclusively on a single-gender dataset to predict early-stage severity from COVID-19 patient medical records [23]. They found that gender-dependent AI models exhibited lower accuracy than unbiased models, underscoring the importance of training models on unbiased data.…”
Section: Discussion
confidence: 99%
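The effect described in [23] — a model trained on one gender's records losing accuracy on a mixed population — can be illustrated with a toy sketch. Everything here is invented for illustration (the synthetic data, the gender-dependent severity threshold, and the plain-SGD logistic regression); none of it is taken from the cited study:

```python
import math
import random

def make_data(n, female_frac, rng):
    """Generate toy patient records: one risk feature plus a gender flag.

    The label rule is hypothetical: the severity threshold on the risk
    feature differs by gender (0.3 for female, 0.7 for male)."""
    rows = []
    for _ in range(n):
        is_female = 1.0 if rng.random() < female_frac else 0.0
        x = rng.random()
        threshold = 0.3 if is_female else 0.7
        y = 1 if x > threshold else 0
        rows.append(([x, is_female, 1.0], y))  # last feature is the bias term
    return rows

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp the logit to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(data, lr=0.5, epochs=300):
    """Plain SGD logistic regression, no library dependencies."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for feats, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, feats)))
            g = p - y  # gradient of the log-loss w.r.t. the logit
            for i in range(len(w)):
                w[i] -= lr * g * feats[i]
    return w

def accuracy(w, data):
    hits = sum(
        1 for feats, y in data
        if (sigmoid(sum(wi * xi for wi, xi in zip(w, feats))) > 0.5) == (y == 1)
    )
    return hits / len(data)

rng = random.Random(0)
train_male_only = make_data(400, female_frac=0.0, rng=rng)  # gender-skewed training set
train_balanced = make_data(400, female_frac=0.5, rng=rng)   # balanced training set
test_mixed = make_data(400, female_frac=0.5, rng=rng)       # mixed evaluation set

acc_skewed = accuracy(train_logreg(train_male_only), test_mixed)
acc_balanced = accuracy(train_logreg(train_balanced), test_mixed)
print(f"male-only training: {acc_skewed:.2f}, balanced training: {acc_balanced:.2f}")
```

Because the male-only training set contains no variation in the gender flag, its weight never receives a gradient update, so the model cannot adjust its decision threshold for female patients and its accuracy on the mixed test set drops relative to the balanced model — the same qualitative pattern the citation statement reports.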
“…Therefore, our developed model may be biased toward the gestational week range of 26 to 32 weeks. Indeed, our recent study showed that data skewed toward one gender can also lead to model bias [48]. Thus, in future work, we need to update our model using more balanced gestational-week data.…”
Section: Discussion
confidence: 99%
“…Algorithmic bias can introduce inequity into clinical decision making and healthcare research 1,12,16. To prevent bias and produce reliable results, the data that form the foundation for the algorithms need careful curation, especially in women's health 17,20. This implies that data readiness for AI is not a one-size-fits-all solution 2,3.…”
Section: Discussion
confidence: 99%