Objective
Suicide is a complex and multifactorial public health problem. Understanding and addressing the various factors associated with suicide is crucial for prevention and intervention efforts. Machine learning (ML) could enhance the prediction of suicide attempts.
Method
A systematic review was performed using the PubMed, Scopus, Web of Science, and SID databases. Using a mixed-methods approach, we aimed to evaluate the performance of ML algorithms, summarize their reported effects, gather relevant and reliable information to synthesize the existing evidence, identify knowledge gaps, and compile a comprehensive list of suicide risk factors.
Results
Forty-one studies published between 2011 and 2022 met the inclusion criteria. We included studies that used machine learning algorithms to predict suicide risk, excluding those based on natural language processing (NLP) or image processing.
Across the reviewed studies, the neural network (NN) algorithm exhibited the lowest accuracy (0.70), whereas random forest achieved the highest (0.94). The lowest area under the curve (AUC) value, 0.54, was observed for the Cox and random forest models, while the XGBoost classifier yielded the highest AUC, reaching 0.97. These AUC values highlight algorithm-specific differences in capturing the trade-off between sensitivity and specificity for suicide risk prediction.
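To illustrate how these two metrics are typically computed for such classifiers, the following minimal Python sketch evaluates a random forest with scikit-learn on synthetic data; the dataset, features, and hyperparameters are hypothetical and are not drawn from any of the reviewed studies.

```python
# Illustrative sketch only: accuracy and ROC AUC for a binary risk
# classifier. The data and parameters are synthetic placeholders,
# not taken from the studies in this review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome dataset standing in for tabular clinical features;
# class imbalance mimics the rarity of suicide attempts in most cohorts.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Accuracy scores hard label predictions; AUC scores predicted probabilities
# and summarizes the sensitivity/specificity trade-off across all thresholds.
acc = accuracy_score(y_test, model.predict(X_test))
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"accuracy = {acc:.2f}, AUC = {auc:.2f}")
```

Because suicide attempts are rare events, AUC is generally the more informative of the two metrics here: a classifier can reach high accuracy on imbalanced data simply by predicting the majority class.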
Furthermore, our investigation identified several common suicide risk factors, including age, gender, substance abuse, depression, anxiety, alcohol consumption, marital status, income, education, and occupation. This comprehensive analysis contributes valuable insights into the multifaceted nature of suicide risk, providing a foundation for targeted preventive strategies and intervention efforts.
Conclusions
The effectiveness of ML algorithms in predicting suicide risk remains controversial. More studies of these algorithms in clinical settings are needed, and the related ethical concerns require further clarification.