2024
DOI: 10.1007/s41809-024-00151-9

Can large language models be sensitive to culture in suicide risk assessment?

Inbar Levkovich,
S. Shinan-Altman,
Zohar Elyoseph

Abstract: Suicide remains a pressing global public health issue. Previous studies have shown the promise of Generative Artificial Intelligence (GenAI) Large Language Models (LLMs) in assessing suicide risk relative to professionals. However, the considerations and risk factors these models use to assess risk remain a black box. This study investigates whether ChatGPT-3.5 and ChatGPT-4 integrate cultural factors in assessing suicide risks (probability of suicidal ideation, potential for a suicide attempt, likelihood of severe suici…
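The abstract describes prompting ChatGPT-3.5 and ChatGPT-4 to rate suicide-related risk while varying cultural context. A minimal sketch of how such a query could be issued, assuming the OpenAI chat API; the vignette wording, rating scale, and model names are illustrative assumptions, not the authors' actual protocol:

```python
# Minimal sketch (not the authors' protocol): ask a chat model to rate
# suicide-related risk for a vignette whose cultural context is varied.
# Vignette text, scale, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

VIGNETTE = (
    "A 45-year-old man from {culture} recently lost his job and reports "
    "feeling like a burden to his family."
)

def rate_risk(culture: str, model: str = "gpt-4") -> str:
    """Return the model's 1-10 rating of the likelihood of suicidal ideation."""
    prompt = (
        VIGNETTE.format(culture=culture)
        + "\nOn a scale of 1 (very unlikely) to 10 (very likely), how likely "
          "is this person to experience suicidal ideation? "
          "Answer with a single number."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for repeatable comparisons
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Compare ratings across cultural contexts (hypothetical examples)
    for culture in ["South Korea", "the United Kingdom"]:
        print(culture, rate_risk(culture))
```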

Cited by 1 publication
References 52 publications