“…In stage 3, a variety of ML-based and DL-based approaches were used, including support vector machine (SVM) [52,53,54,55,89], extreme gradient boosting (XGB) [59,90], gradient-boosting decision trees (GB) [91], random forest (RF) [58,92,93,94], AdaBoost, extremely randomized trees (ERT) [72], deep forest [95], light gradient boosting machine (LGB) [73], naive Bayes (NB) [73], k-nearest neighbor (KNN) [73], multilayer perceptron (MLP) [72], DNN [66,96], CNN [63,67,97], residual CNN (RCNN) [98], RNN [99,100], gated recurrent unit (GRU)-based bidirectional RNN (BRNN) [101], deep belief network (DBN), Bi-LSTM [70,86], feed-forward attention [70], residual network (ResNet), and RBFN [102]. In some methods, the predicted outputs of several classifiers are combined and fed into a further classifier to build the final prediction model.…”
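Combining the outputs of several base classifiers and training a further classifier on them, as described in the last sentence, is commonly implemented as stacked generalization (stacking). The following is a minimal sketch of that idea, assuming scikit-learn is available; the specific models chosen here (SVM and RF as base learners, logistic regression as the meta-classifier) and the synthetic data are illustrative and are not taken from any of the cited works.

```python
# Minimal stacking sketch: base classifiers' predictions become the input
# features of a final (meta) classifier. Model choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary-classification data standing in for the real feature sets.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners (e.g., SVM and RF, two of the classifiers surveyed above).
base_learners = [
    ("svm", SVC(probability=True, random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]

# Meta-classifier trained on the base learners' cross-validated predictions.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```

Using cross-validated predictions (the `cv` argument) to train the meta-classifier avoids leaking the base learners' training-set fit into the final model, which is the usual motivation for this two-level design.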