2022
DOI: 10.1038/s41598-022-19828-8

A machine learning approach for predicting suicidal ideation in post stroke patients

Abstract: Currently, the identification of stroke patients with an increased suicide risk is mainly based on self‐report questionnaires, and this method suffers from a lack of objectivity. This study developed and validated a suicide ideation (SI) prediction model using clinical data and identified SI predictors. Significant variables were selected through traditional statistical analysis based on retrospective data of 385 stroke patients; the data were collected from October 2012 to March 2014. The data were then appli…

Cited by 6 publications (4 citation statements)
References 50 publications
“…It enhances the classification tree by considering a random subspace of predictors when building a tree and by creating a diverse set of trees that contribute to classification performance [50]. The XGBoost method is a variant of the gradient boosting algorithm, which minimizes errors by applying the gradient descent method in a boosting algorithm combining several weak learners [51]. MLP is a neural network classifier consisting of feedforward networks with dense, all-to-all connections between layers.…”
Section: Methods (mentioning)
confidence: 99%
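The quoted description of XGBoost — minimizing errors by applying gradient descent in a boosting algorithm that combines several weak learners — can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation; it fits one-feature decision stumps to the residuals (the negative gradient of squared loss) and adds their scaled predictions. All names (`stump_fit`, `boost`) are invented for illustration.

```python
# Illustrative sketch of boosting: each round fits a weak learner (a
# one-feature decision stump) to the current residuals, then adds its
# scaled prediction to the ensemble.

def stump_fit(xs, residuals):
    """Pick the threshold on a single feature that best fits the residuals."""
    best = None
    for threshold in sorted(set(xs))[:-1]:  # split must leave both sides non-empty
        left = [r for x, r in zip(xs, residuals) if x <= threshold]
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        left_mean = sum(left) / len(left)
        right_mean = sum(right) / len(right)
        sse = (sum((r - left_mean) ** 2 for r in left)
               + sum((r - right_mean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, threshold, left_mean, right_mean)
    return best[1:]  # (threshold, left_value, right_value)

def boost(xs, ys, n_rounds=10, learning_rate=0.5):
    """Combine weak learners: residuals are the negative gradient of squared loss."""
    preds = [sum(ys) / len(ys)] * len(ys)  # start from the mean prediction
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        t, lv, rv = stump_fit(xs, residuals)
        preds = [p + learning_rate * (lv if x <= t else rv)
                 for x, p in zip(xs, preds)]
    return preds

preds = boost([1, 2, 3, 4], [0.0, 0.0, 1.0, 1.0])
```

With each round the residuals shrink geometrically, so the ensemble of weak stumps converges toward the targets — the mechanism the citation statement attributes to gradient boosting.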
“…This provides a brief (within a 10-minute interaction) way to document essential psychological constructs in stroke. For instance, acceptance has been identified as negatively associated with depression and anxiety [13] and low acceptance can be identified as a risk for developing mental health disorder [14]. As a result of this relationship, acceptance should be examined early during rehabilitation for people with stroke [15].…”
Section: Introduction (mentioning)
confidence: 99%
“…Moreover, the notion of explainable artificial intelligence is enjoying immense popularity now. Explainable artificial intelligence can be defined as “artificial intelligence to identify major predictors of the dependent variable”, and there are four approaches to explainable artificial intelligence at this point, i.e., random forest impurity importance, random forest permutation importance [20,21], machine learning accuracy importance, and Shapley additive explanations (SHAP) [15,22,23,24,25,26,27,28,29,30,31,32]. Random forest impurity importance calculates the node impurity decrease from the creation of a branch on a certain predictor.…”
Section: Introduction (mentioning)
confidence: 99%
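The node-impurity decrease that random forest impurity importance accumulates per predictor can be sketched in a few lines of pure Python for binary labels with Gini impurity. This is an illustrative toy, not the cited implementations; the function names and data are invented.

```python
# Illustrative sketch: the weighted Gini impurity decrease from one split,
# the per-branch quantity that random forest impurity importance sums up
# for each predictor (binary 0/1 labels assumed).

def gini(labels):
    """Gini impurity of a set of 0/1 labels."""
    p = sum(labels) / len(labels)
    return 1.0 - p ** 2 - (1.0 - p) ** 2

def impurity_decrease(xs, ys, threshold):
    """Parent impurity minus the size-weighted impurity of the two children.
    Assumes the threshold leaves both children non-empty."""
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    n = len(ys)
    return (gini(ys)
            - (len(left) / n) * gini(left)
            - (len(right) / n) * gini(right))

# A perfectly separating split removes all of the parent's impurity:
decrease = impurity_decrease([1, 2, 3, 4], [0, 0, 1, 1], threshold=2)
```

Here the parent's Gini impurity is 0.5 and both children are pure, so the decrease is 0.5 — the full impurity of the node is credited to the predictor that created the branch.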
“…Machine learning accuracy importance (an extension of random forest permutation importance) calculates the accuracy decrease from the exclusion of data on the predictor. The SHAP value of a predictor for a participant measures the difference between what machine learning predicts for the probability of GID with and without the predictor [15,22,23,24,25,26,27,28,29,30,31,32]. For example, let us assume in a hypothetical figure (Figure 1) that the SHAP values of diabetes (x033) for GERD have the range of (−0.05, 0.30).…”
Section: Introduction (mentioning)
confidence: 99%
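The SHAP interpretation quoted above — a predictor's value is the difference between what the model predicts with and without that predictor — can be made concrete with an exact Shapley-value computation over all coalitions, where "without" replaces a feature by a background value. The model and values below are invented toy examples, not from the cited studies.

```python
# Illustrative sketch of exact Shapley values: each feature's contribution
# is the weighted average, over all coalitions of the other features, of
# the change in model output from adding that feature.
from itertools import combinations
from math import factorial

def shapley_values(model, x, background):
    n = len(x)

    def coalition_output(coalition):
        # Features outside the coalition are replaced by background values.
        z = [x[i] if i in coalition else background[i] for i in range(n)]
        return model(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (coalition_output(set(subset) | {i})
                                 - coalition_output(set(subset)))
        phis.append(phi)
    return phis

# Toy additive model: the contributions decompose exactly per feature.
model = lambda z: z[0] + 2.0 * z[1]
phis = shapley_values(model, x=[1.0, 1.0], background=[0.0, 0.0])
```

For this additive toy model the Shapley values are 1.0 and 2.0, and they sum to the difference between the prediction at `x` and at the background — the "with vs without the predictor" decomposition the citation statement describes. (Real SHAP libraries approximate this exact computation, which is exponential in the number of features.)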