2019
DOI: 10.1002/bsl.2392

Machine learning in suicide science: Applications and ethics

Abstract: For decades, our ability to predict suicide has remained at near‐chance levels. Machine learning has recently emerged as a promising tool for advancing suicide science, particularly in the domain of suicide prediction. The present review provides an introduction to machine learning and its potential application to open questions in suicide research. Although only a few studies have implemented machine learning for suicide prediction, results to date indicate considerable improvement in accuracy and positive pr…

Cited by 112 publications (90 citation statements)
References 28 publications
“…42,53 Future work in the ED setting should focus on incorporating valid and reliable predictive algorithms as an aid to existing clinical decision‐making practices, while also aligning suicide risk decisions with appropriate and evidence‐based clinical interventions to reduce patient suicide risk.25–27,52,54 Overall, results from this study provide important implications for improving ED care and treatment planning for patients reporting active suicidal ideation.…”
Section: Limitations (mentioning)
confidence: 78%
“…43 Consider a machine learning model trained using the electronic health records of medical visits; 44 this model might not be able to accurately predict psychiatric conditions in immigrant populations that avoid interacting with the health‐care system. 45 Additionally, clinician bias in International Classification of Diseases codes or clinical notes can introduce variations in the inputs to a machine learning model that, in turn, bias the model's predictions for minority groups. 46 Further, with less interpretable models, it can be more challenging to detect, track and rectify these different sources of bias.…”
Section: Machine Learning Models: Performance Versus Interpretability (mentioning)
confidence: 99%
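The subgroup-bias concern in the excerpt above can be checked empirically by fitting a model once and then comparing its performance metrics within each demographic group. The sketch below is only illustrative, not code from the cited review or the citing article: the tabular data layout, the hypothetical `group` and `outcome` column names, and the logistic-regression baseline are all assumptions.

```python
# Illustrative audit step (assumed data layout): fit one classifier on
# health-record-style features, then report AUC and sensitivity per subgroup.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split


def per_group_metrics(df: pd.DataFrame, features: list,
                      outcome: str, group: str) -> pd.DataFrame:
    """Fit one model on all rows, then report AUC and sensitivity per subgroup."""
    X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
        df[features], df[outcome], df[group],
        test_size=0.3, random_state=0, stratify=df[outcome])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    rows = []
    for value in g_test.unique():
        mask = (g_test == value).to_numpy()
        y_true = y_test[mask]
        y_prob = model.predict_proba(X_test[mask])[:, 1]
        y_pred = (y_prob >= 0.5).astype(int)
        rows.append({
            "group": value,
            "n": int(mask.sum()),
            # AUC is undefined when a subgroup contains only one outcome class
            "auc": roc_auc_score(y_true, y_prob) if y_true.nunique() > 1 else np.nan,
            "sensitivity": recall_score(y_true, y_pred, zero_division=0),
        })
    return pd.DataFrame(rows)
```

Large gaps in per-group AUC or sensitivity would flag exactly the kind of disparity that becomes harder to detect and rectify in less interpretable models.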
“…However, there are challenges with risk identification. Most risk identification strategies are assessment‐based—often requiring intensive training, expertise, and/or time (Linthicum, Schafer & Ribeiro, 2019). Moreover, these assessments are based on rather simple combinations of risk factors known from suicide research (Linthicum, Schafer & Ribeiro, 2019). A recent meta‐analysis has shown that despite over 50 years of suicide research, we have seen little improvement in our predictive accuracy for suicide‐related behaviors (Franklin et al.).…”
mentioning
confidence: 99%
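The point about limited predictive accuracy, together with the abstract's reference to positive predictive value, rests on the base-rate problem: because suicide‐related outcomes are rare, even a classifier with high sensitivity and specificity yields mostly false positives. A minimal worked example, using hypothetical numbers rather than figures from any cited study:

```python
# Minimal illustration of the base-rate effect on positive predictive value (PPV).
# The sensitivity, specificity, and base-rate values below are hypothetical.
def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """PPV via Bayes' rule: true positives / all positive predictions."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.80, 0.90, 0.01), 3))  # 0.075 -> only ~7.5% of flagged cases are true positives
```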