Background
Prediction tools for intraoperative bleeding events remain scarce. We aimed to develop machine learning-based models and to identify the most important predictors using real-world data from electronic medical records (EMRs).
Methods
An established database of surgical inpatients in Shanghai was utilized for analysis. A total of 51,173 inpatients were assessed for eligibility, and 48,543 were included in the final dataset. Patients were divided into haemorrhage (N = 9728) and without-haemorrhage (N = 38,815) groups according to the occurrence of bleeding during the procedure. Candidate predictors were selected from 27 variables, including sex (N = 48,543), age (N = 48,543), BMI (N = 48,543), renal disease (N = 26), heart disease (N = 1309), hypertension (N = 9579), diabetes (N = 4165), coagulopathy (N = 47), and other features. The models were constructed with 7 machine learning algorithms, i.e., light gradient boosting (LGB), extreme gradient boosting (XGB), categorical boosting (CatB), adaptive boosting of decision trees (AdaB), logistic regression (LR), long short-term memory (LSTM), and multilayer perceptron (MLP). The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance.
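To make the modelling step concrete, the following is a minimal sketch of such a pipeline in Python. The synthetic dataset, the 80/20 split, and the hyperparameters are illustrative assumptions standing in for the study's EMR data and protocol; the lightgbm and scikit-learn calls themselves are standard.

```python
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the EMR data: 27 candidate predictors and a
# ~20% positive (haemorrhage) class, mirroring the cohort sizes above.
X, y = make_classification(
    n_samples=48_543, n_features=27, weights=[0.8, 0.2], random_state=0
)

# Hold out a test set for evaluation (80/20 split assumed here).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Fit a LightGBM classifier (the LGB model referred to in the text);
# hyperparameters are illustrative, not the study's.
model = lgb.LGBMClassifier(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate with the area under the ROC curve, as in the study.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC = {auc:.3f}")
```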
Results
The mean age of the inpatients was 53 ± 17 years, and 57.5% were male. Combining multiple indicators, LGB showed the best predictive performance for intraoperative bleeding (AUC = 0.933, sensitivity = 0.87, specificity = 0.85, accuracy = 0.87) compared with XGB, CatB, AdaB, LR, MLP, and LSTM. The three most important predictors identified by LGB were operative time, D-dimer (DD), and age.
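Continuing the sketch above, the snippet below shows how a predictor ranking and threshold metrics of this kind can be read from a fitted LightGBM model. The 0.5 decision threshold is an assumption, and the feature names are synthetic column indices rather than the study's variables.

```python
from sklearn.metrics import confusion_matrix

# Rank predictors by LightGBM's default (split-count) importance; in
# the study, the top three were operative time, D-dimer, and age.
ranked = sorted(
    zip(model.feature_name_, model.feature_importances_),
    key=lambda kv: kv[1],
    reverse=True,
)
print("Top predictors:", ranked[:3])

# Sensitivity, specificity, and accuracy at an assumed 0.5 threshold.
y_pred = (model.predict_proba(X_test)[:, 1] >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"accuracy    = {(tp + tn) / (tp + tn + fp + fn):.2f}")
```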
Conclusions
We propose LGB as the best-performing Gradient Boosting Decision Tree (GBDT) algorithm for the evaluation of intraoperative bleeding. It may serve as a simple and useful tool for predicting intraoperative bleeding in clinical settings. Operative time, DD, and age warrant particular attention.