2021
DOI: 10.1016/j.ins.2021.05.055
Approximating XGBoost with an interpretable decision tree

Cited by 236 publications (79 citation statements)
References 22 publications
“…These capabilities make DT widely employed for different applications that require a comprehension of both the model construction and its prediction. Although decision tree-based prediction models are highly interpretable, these intelligent decision-making models have limited prediction performance due to the nearsightedness characteristic of their induction models [ 89 , 90 ]. When complex interactions exist among input features, DT models usually fail to capture these, leading to essential biases.…”
Section: Proposed XAI-based Model for COVID-19 Diagnosis
confidence: 99%
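The "nearsightedness" of greedy induction mentioned above can be made concrete with a minimal example: on XOR-labelled data the two features interact perfectly, yet no single-feature root split reduces impurity at all, so a greedy tree sees no gain on its first step even though a depth-2 tree would classify perfectly. The sketch below is illustrative only and not taken from the cited papers:

```python
# Illustrative sketch (not from the cited papers): why greedy,
# single-split ("nearsighted") tree induction can miss feature
# interactions. On XOR-labelled data, no single root split
# reduces Gini impurity at all.

def gini(ys):
    """Gini impurity of a list of 0/1 labels."""
    if not ys:
        return 0.0
    p = sum(ys) / len(ys)
    return 2 * p * (1 - p)

# label = x0 XOR x1: informative only through the interaction
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def split_gain(axis):
    """Impurity reduction from splitting on one feature alone."""
    left = [y for x, y in data if x[axis] == 0]
    right = [y for x, y in data if x[axis] == 1]
    parent = gini([y for _, y in data])
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(data)
    return parent - weighted

gains = [split_gain(0), split_gain(1)]
print(gains)  # [0.0, 0.0] -> greedy induction sees no useful first split
```

Boosted ensembles such as XGBoost mitigate this by combining many trees, each correcting the errors of the ensemble so far, at the cost of the single tree's interpretability.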
“…The results were presented in the Supplementary Table 2 ( Supplementary File 1 ). Finally, we set the sampling time-window to 24 h and the sliding time-step to 8 h. Then, we developed several state-of-the-art models that are widely used as follows: (1) Classic machine learning models: Logistic regression (LR) ( 24 ) and support vector machines (SVM) ( 25 ), which are the most commonly used algorithms in existing research; (2) Enhanced machine learning models: gradient boosting machine (LightGBM) ( 26 ) and XGBoost ( 27 ), which are widely regarded as the best algorithm for data prediction and are adopted by many competition winning models in the field of machine learning; (3) Classic deep learning models: RNN ( 21 ) and long short-term memory network (LSTM) ( 28 ), which are the most commonly chosen deep learning models in time-series data, which have shown excellent performance in several time series studies; (4) Improved deep learning models: RNN-Decay and ODE-RNN. The detailed method was described in Supplementary File 1 .…”
Section: Methods
confidence: 99%
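The windowing scheme described in the statement above (a 24 h sampling window advanced by an 8 h sliding step) can be sketched as follows. The function name and the assumption that only fully contained windows are kept are my own, not details from the cited study:

```python
# Hedged sketch (assumed details): generate 24 h sampling windows
# that slide forward in 8 h steps over an observation period.

def sliding_windows(total_hours, window=24, step=8):
    """Return (start, end) hour pairs for each fully contained window."""
    out = []
    start = 0
    while start + window <= total_hours:
        out.append((start, start + window))
        start += step
    return out

print(sliding_windows(48))  # [(0, 24), (8, 32), (16, 40), (24, 48)]
```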
“…The Gradient-Boosting Decision Tree (GBDT) is a machine learning technique for regression, classification, and other tasks, using a decision tree flowchart approach combined with the boosting ensemble technique. The GBDT improves the capacity of the decision tree by reducing the residuals generated during the training procedure [22,23]. It has been widely applied in social science research [24][25][26][27][28] and gradually introduced into the field of natural science [1][2][3][4][5][6][7][29][30][31][32][33][34][35].…”
Section: Introduction
confidence: 99%
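The residual-reduction idea behind GBDT described above can be illustrated with a minimal sketch: each boosting round fits a weak learner (here a depth-1 regression stump) to the current residuals and adds a shrunken copy of it to the ensemble, so the training residuals shrink round by round. This is an illustrative toy, not the implementation used in any of the cited works:

```python
# Illustrative sketch (not the cited implementations): gradient boosting
# for regression with squared loss. Each round fits a one-split stump
# to the residuals of the ensemble built so far.

def fit_stump(xs, rs):
    """Best single-threshold stump minimising squared error on residuals."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, rs) if x <= thr]
        right = [r for x, r in zip(xs, rs) if x > thr]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda x: lmean if x <= thr else rmean

def gbdt_fit(xs, ys, n_rounds=200, lr=0.1):
    """Boost stumps on residuals; return the ensemble prediction function."""
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]  # what is still unexplained
        s = fit_stump(xs, residuals)
        stumps.append(s)
        preds = [p + lr * s(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 3.0, 4.0]
f = gbdt_fit(xs, ys)  # training residuals shrink toward zero over rounds
```

The shrinkage factor `lr` plays the same role as the learning rate in XGBoost and LightGBM: smaller values need more rounds but regularise the fit.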