“…, 2012). These methods encompass: decision trees, which use a top-down divide-and-conquer approach, selecting attributes and partitioning the dataset into increasingly homogeneous subsets, with homogeneity assessed via entropy (C4.5 algorithm) or the Gini criterion (CART algorithm); decision rules, which offer rules based on a single attribute (1R algorithm), employ top-down separate-and-conquer approaches (e.g., the RIPPER algorithm), or blend divide-and-conquer and separate-and-conquer strategies (e.g., the PART algorithm); ensemble methods, which combine multiple learners and weight their outputs to derive a final prediction (e.g., random forest, which applies bagging to decision-tree base learners, and boosting methods, which likewise build ensembles of trees); nearest neighbours, which classify instances by their Euclidean distance to stored training examples (e.g., the lazy IBk algorithm); support vector machines, which construct hyperplanes that maximise the margin between classes and employ kernel functions to establish non-linear decision boundaries; statistical classifiers, such as logistic regression for probability estimation, Naïve Bayes, which estimates class-conditional probabilities by assuming attribute independence, and Bayesian networks, which specify conditional dependencies among attributes; and neural networks, grounded in the perceptron convergence theorem, which construct networks of neurons and fit the functions and associated weights to the training data (Chiu and Xu, 2023; Hastie et al., 2009; Witten and Frank, 2005).…”
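The entropy and Gini criteria mentioned above for C4.5 and CART can be sketched concretely. The following is a minimal pure-Python illustration of how a split's quality might be scored; the function names and the toy labels are ours for illustration, not the cited algorithms' actual implementations:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label multiset (the C4.5-style criterion)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a label multiset (the CART-style criterion)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, partitions):
    """Entropy reduction achieved by splitting `parent` into `partitions`."""
    n = len(parent)
    return entropy(parent) - sum(len(p) / n * entropy(p) for p in partitions)

# Hypothetical 50/50 parent node split perfectly by some attribute:
parent = ["yes"] * 4 + ["no"] * 4
left, right = ["yes"] * 4, ["no"] * 4

print(entropy(parent))                          # 1.0 (maximally mixed)
print(gini(parent))                             # 0.5
print(information_gain(parent, [left, right]))  # 1.0 (pure children)
```

A top-down tree inducer would evaluate such a score for every candidate attribute split and greedily choose the best one, then recurse on each resulting subset.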