“…The range of all possible algorithms is unknowable because new algorithms and variations of existing ones continue to emerge, either as architectural modifications (e.g., changes in neural network architecture) or as add-ons (e.g., optimization algorithms; Chen et al., 2020; Yin & Li, 2022; Gharehchopogh et al., 2023). An empirical analysis revealed that the algorithms used by various authors include (non-exhaustively): Bayes networks (Porwal et al., 2006; Yin & Li, 2022); logistic regression (Agterberg & Bonham-Carter, 1999; Carranza & Hale, 2001; Karbalaei Ramezanali et al., 2020; Lin et al., 2020; Zhang et al., 2022c); support vector machines (Zuo & Carranza, 2011; Zhang et al., 2021; Senanayake et al., 2023); tree-based methods, such as random forest, extra trees, and XGBoost (Chen & Wu, 2019; Sun et al., 2019; Zhang et al., 2022a); artificial neural networks, such as extreme learning machines (Chen & Wu, 2017); deep learning methods (Xiong et al., 2018; Wang et al., 2020; Yang et al., 2022; Zuo et al., 2022; Li et al., 2023; Yin et al., 2023; Zuo & Xu, 2023); and reinforcement learning (Shi et al., 2023). Some applied MPM studies have also employed ensemble learning, an approach that improves outcome reliability by integrating the outputs of multiple independent models (e.g., Senanayake et al., 2023; Shetty et al., 2023).…”
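The ensemble idea mentioned above can be illustrated with a minimal sketch. This is not the implementation of any cited study; it simply combines three of the algorithm families named in the passage (logistic regression, random forest, support vector machine) in a soft-voting ensemble over synthetic stand-in data, using scikit-learn.

```python
# Minimal illustrative sketch of ensemble learning (soft voting), assuming
# scikit-learn is available. Features and labels are synthetic stand-ins
# for prospectivity data, not any cited study's dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic binary-classification data standing in for mapped evidence layers.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the class probabilities of the independent base models,
# which is one common way to integrate multiple models' outputs.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)
```

In practice the integration step varies by study (voting, averaging, stacking, or fuzzy combination); soft voting is used here only because it is the simplest to demonstrate.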