2022
DOI: 10.3390/info13050237

Bias Discovery in Machine Learning Models for Mental Health

Abstract: Fairness and bias are crucial concepts in artificial intelligence, yet they are relatively ignored in machine learning applications in clinical psychiatry. We computed fairness metrics and present bias mitigation strategies using a model trained on clinical mental health data. We collected structured data related to the admission, diagnosis, and treatment of patients in the psychiatry department of the University Medical Center Utrecht. We trained a machine learning model to predict future administrations of benzodiazepines…

Cited by 12 publications (6 citation statements)
References 42 publications
“…Our clinicians support the idea that the focus on fairness should be on ensuring that individuals at risk of depression are equally identified across groups, so they are fairly provided with the necessary care and support. This objective aligns with the use of equal opportunity criterion, which is also considered by previous studies in ML for mental health 9,30. In the absence of an expert-informed opinion, the equalized odds criterion 31 serves as an alternative fairness objective which offers a stricter standard than the equal opportunity criterion.…”
Section: Methods (mentioning)
confidence: 75%
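To make the distinction concrete: equal opportunity compares only true positive rates across groups, whereas equalized odds additionally constrains false positive rates, which is why it is the stricter criterion. Below is a minimal sketch (not code from either cited study; the arrays y_true, y_pred, and group are hypothetical) that computes both gaps for a binary classifier and a binary sensitive attribute.

```python
import numpy as np

def group_rates(y_true, y_pred, mask):
    """True and false positive rates within one group."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = yp[yt == 1].mean() if np.any(yt == 1) else np.nan
    fpr = yp[yt == 0].mean() if np.any(yt == 0) else np.nan
    return tpr, fpr

def fairness_gaps(y_true, y_pred, group):
    """Equal opportunity gap (TPR only) and equalized odds gap (TPR and FPR)."""
    tpr_a, fpr_a = group_rates(y_true, y_pred, group == 0)
    tpr_b, fpr_b = group_rates(y_true, y_pred, group == 1)
    equal_opportunity = abs(tpr_a - tpr_b)
    equalized_odds = max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))
    return equal_opportunity, equalized_odds

# Toy example: group 0 vs. group 1
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gaps(y_true, y_pred, group))  # both gaps are 1/3 here
```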
“…Only a handful of studies have adopted methods to counteract bias. For instance, reweighing (RW) bias-mitigation technique 33 was used to minimize bias when forecasting future benzodiazepine administrations 30. Likewise, others applied Suppression (SUP) 34 and RW approaches to reduce bias in the prediction of postpartum depression 9.…”
Section: Methods (mentioning)
confidence: 99%
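Reweighing is a pre-processing strategy that assigns weights to training instances so that the sensitive attribute and the outcome become statistically independent before model fitting. A minimal sketch of how it is commonly applied with IBM's AI Fairness 360 toolkit is shown below; the data frame, the column names (gender, benzo_future), and the privileged/unprivileged group encoding are illustrative assumptions rather than the cited study's actual variables.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Toy structured data: one binary sensitive attribute and a binary outcome.
df = pd.DataFrame({
    "gender":       [0, 0, 1, 1, 0, 1, 1, 0],
    "age":          [34, 51, 29, 62, 45, 38, 57, 41],
    "benzo_future": [1, 0, 1, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["benzo_future"],
    protected_attribute_names=["gender"],
    favorable_label=1,       # which outcome counts as favorable is an assumption here
    unfavorable_label=0,
)

# Reweigh so that 'gender' and the label are independent in the weighted data.
rw = Reweighing(
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)
reweighed = rw.fit_transform(dataset)

# These per-instance weights can be passed to most estimators via sample_weight.
print(reweighed.instance_weights)
```

Suppression, by contrast, simply drops the sensitive attribute (and features strongly correlated with it) from the model's inputs.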
“…In the generation of clinical cases, ChatGPT-4 failed to create cases that depicted demographic diversity and relied on stereotypes when choosing gender or ethnicity [27]. Thus, the need for "fair AI" has been pointed out with the goal to develop prediction models that provide equivalent outputs for identical individuals who differ only in one sensitive attribute [28]. To avoid or at least reduce potential bias and move towards fair AI, this bias first needs to be conceptualized, measured, and understood [21].…”
Section: Biases and Responsible AI (mentioning)
confidence: 99%
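One straightforward way to probe the "equivalent outputs" requirement described above is a flip test: change only the sensitive attribute for otherwise identical individuals and compare the model's predictions. The sketch below uses synthetic data and a hypothetical feature layout purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 0] = rng.integers(0, 2, size=200)    # column 0: binary sensitive attribute
y = rng.integers(0, 2, size=200)          # synthetic labels

model = LogisticRegression().fit(X, y)

# Identical individuals, differing only in the sensitive attribute.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]

changed = np.mean(model.predict(X) != model.predict(X_flipped))
print(f"Predictions that change when only the attribute flips: {changed:.1%}")
```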
“…IBM launched AI Fairness 360 [30][31][32], which can help detect and mitigate unwanted bias in machine learning models and datasets. It provides around 70 fairness metrics to test for bias and 11 algorithms to reduce bias in datasets and models, thereby reducing software bias and improving its fairness (e.g., [33]).…”
Section: Status Quo (mentioning)
confidence: 99%
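As a small, hedged illustration of the toolkit's metric API (the toy data frame, column names, and group encoding below are assumptions, not data from any of the cited studies), the following computes two of its dataset-level metrics, statistical parity difference and disparate impact.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "gender":  [0, 0, 1, 1, 0, 1, 1, 0],
    "outcome": [1, 0, 1, 1, 0, 0, 1, 0],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["outcome"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"gender": 0}],
    privileged_groups=[{"gender": 1}],
)
# A difference of 0 and a ratio of 1 would indicate parity between the groups.
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```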