2024
DOI: 10.2196/55988

Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz’s Theory of Basic Values

Dorit Hadar-Shoval,
Kfir Asraf,
Yonathan Mizrachi
et al.

Abstract: Background Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz’s theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapi…
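
The abstract frames STBV as a framework for quantifying value orientations. A minimal sketch of how such a value profile could be elicited from an LLM is given below. It is purely illustrative and not the study's instrument: the questionnaire items, the 1-6 rating prompt, and the ask_llm helper are hypothetical placeholders, and the interface to the model is left abstract so the same scoring logic could apply to any LLM under evaluation.

from statistics import mean

# Illustrative items, each tagged with one of Schwartz's basic values (assumed wording).
ITEMS = [
    ("It is important to this person to have new ideas and be creative.", "self-direction"),
    ("It is important to this person to be very successful.", "achievement"),
    ("It is important to this person to help the people around them.", "benevolence"),
    ("It is important to this person that everyone is treated equally.", "universalism"),
]

PROMPT = (
    "Rate how much the following statement describes you on a scale from 1 "
    "(not like me at all) to 6 (very much like me). Reply with the number only.\n\n{item}"
)

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation (assumed interface)."""
    raise NotImplementedError

def score_value_profile(items=ITEMS):
    """Average the model's 1-6 ratings within each Schwartz value dimension."""
    ratings = {}
    for text, value in items:
        reply = ask_llm(PROMPT.format(item=text))
        rating = int(reply.strip()[0])  # naive parse; real use needs validation and retries
        ratings.setdefault(value, []).append(rating)
    return {value: mean(scores) for value, scores in ratings.items()}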

Cited by 9 publications (5 citation statements)
References 72 publications
“…Since the release of GAI systems, numerous studies have been conducted regarding their applications in the field of mental health [2,7-10,22,34,48-53]. However, the current research seeks to examine the entry of this technology from a broader perspective, particularly focusing on its potential impact on psychotherapy.…”
Section: Discussion
confidence: 99%
“…In opposition to the widespread but simplistic view that regards GAI systems as impartial and objective, we contend they are based on certain values and cultures that are shaped by the critical factor of the alignment process [34]. Understanding the influence of the alignment process is essential for the responsible integration of GAI into psychotherapy.…”
Section: Engaging With the Artificial Third: Three Fundamental Questions
confidence: 99%
“…Recent research has shown that LLMs can accurately identify emotions and mental disorders, such as schizophrenia, depression, and anxiety, and provide treatment recommendations and prognoses comparable to mental health professionals. [15-27] Despite their potential to democratize clinical knowledge and encourage ideological pluralism, [21,28,29] ethical concerns persist. These include data privacy, algorithmic opacity, threats to patient autonomy, risks of anthropomorphism, technology access disparities, corporate concentration, deep fakes, fake news, reduced reliance on professionals, and amplification of biases.…”
Section: AI-Based Technology in Mental Health
confidence: 99%