“…Ensemble methods train multiple weak classifiers, such as decision trees, and combine their outputs (e.g., by majority voting) to achieve better predictive performance. We observed frequent use of Random Forests (RF) [19,26,27,34,37,38,43,49,53,54,55,56,57,58], as well as Gradient Boosting Trees (GBT) [37,42] and eXtreme Gradient Boosting (XGBoost) [57,58]. While RF [114] uses bagging (i.e., bootstrap aggregating) and random feature subsets to train multiple, diverse trees independently, Gradient Boosting [115] builds one tree at a time, each newly added weak learner addressing the shortcomings of the existing trees by assigning more weight to instances with incorrect predictions and high errors.…”
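As a rough illustration of the bagging-versus-boosting distinction described above, the sketch below trains both ensemble types on a synthetic binary-classification task using scikit-learn; the dataset, hyperparameters, and library choice are assumptions made for demonstration only, not taken from the surveyed studies.

```python
# Minimal sketch contrasting bagging (Random Forest) with boosting
# (Gradient Boosting) on a synthetic dataset; settings are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Random Forest: many trees trained independently on bootstrap samples,
# each split drawn from a random subset of features; predictions are
# combined by majority voting.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                            random_state=42)
rf.fit(X_train, y_train)

# Gradient Boosting: trees are added sequentially, each new shallow tree
# fit to the residual errors of the current ensemble, so instances with
# large errors receive more attention in later iterations.
gbt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=3, random_state=42)
gbt.fit(X_train, y_train)

print("RF accuracy: ", accuracy_score(y_test, rf.predict(X_test)))
print("GBT accuracy:", accuracy_score(y_test, gbt.predict(X_test)))
```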