Published: 2020
DOI: 10.1371/journal.pone.0240376

Artificial Intelligence in mental health and the biases of language based models

Abstract: Background The rapid integration of Artificial Intelligence (AI) into the healthcare field has occurred with little communication between computer scientists and doctors. The impact of AI on health outcomes and inequalities calls for health professionals and data scientists to make a collaborative effort to ensure historic health disparities are not encoded into the future. We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biase…

Cited by 77 publications (60 citation statements)
References 50 publications (78 reference statements)
“…Word vectors encode words into a high-dimensional space (50-300 dimensions) that retain semantic meaning and have demonstrated state-of-the-art performance on many language tasks (48-50). However, the semantic meaning encoded in word vectors is derived from specific corpora (e.g., all of Wikipedia) and in many cases has been found to also retain biases (50-52). Additionally, models using word vectors may struggle to explain the reason for a specific prediction, which is becoming required for clinical decision support systems (53, 54).…”
Section: Discussion
confidence: 99%
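
The statement above describes how word vectors place words in a 50-300 dimensional space whose geometry retains both semantic meaning and corpus-derived bias. As a minimal sketch of how such bias can be probed, the snippet below compares cosine-similarity associations in a publicly available GloVe model loaded through gensim; the model choice, the mental-health target terms and the gendered attribute lists are illustrative assumptions, not the vocabulary or method used in the paper.

# Minimal sketch: probing a pretrained word-embedding model for association bias.
# Assumes gensim is installed; downloads the 50-dimensional GloVe vectors trained
# on Wikipedia + Gigaword (an illustrative choice, not the paper's exact setup).
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # returns gensim KeyedVectors

# Illustrative (hypothetical) word lists; a real audit would use validated sets.
target_terms = ["depressed", "anxious", "psychotic"]
group_a = ["woman", "female", "she"]
group_b = ["man", "male", "he"]

def mean_similarity(targets, attributes):
    """Average cosine similarity between every target/attribute word pair."""
    sims = [model.similarity(t, a) for t in targets for a in attributes]
    return sum(sims) / len(sims)

# A positive gap means the target terms sit closer to group_a than to group_b
# in the embedding space: one simple signal of an encoded association bias.
gap = mean_similarity(target_terms, group_a) - mean_similarity(target_terms, group_b)
print(f"association gap (group_a - group_b): {gap:+.4f}")

Run as written, the script prints a single signed number; a full WEAT-style audit would add permutation tests and validated word sets, but the underlying signal, relative distance in the embedding space, is the same one the cited work flags as a source of bias.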
“…Deep learning algorithms are entirely dependent on the data used for training, and it is recognised that algorithms derived from homogenous population data might exacerbate racial and other disparities in healthcare [33]. This has been well described in several studies, and a literature review of 52 papers using natural language processing (NLP) models in mental health found that no model addressed the possible biases in their development [34]. Another example is ImageNet, which is the most widely used data set for Deep Neural Network applications, but 45% of its data comes from the USA with less than 10% from developing countries [35], a lack of geodiversity which lends itself to racial and societal bias.…”
Section: Discussion
confidence: 99%
“…A review of chatbots and conversational agents used in mental health found a small number of academic psychiatric studies with limited heterogeneity; there is a lack of high-quality evidence for diagnosis, treatment or therapy, but there is a high potential for effective and agreeable mental health care if correctly and ethically implemented [95]. A major research constraint is that chatbots and predictive algorithms may be biased and perpetuate inequities in the underserved and the unserved [96-99]. The ethics of a patient-therapist relationship and the limited skills and emotional intelligence of chatbots require a solution [100].…”
Section: Artificial Intelligence
confidence: 99%