“…In parallel ensemble methods (e.g., bagging and random forest [RF]; Breiman, 2001), however, each base learner is built independently, and the base learners are generated in parallel. Moreover, base learners may be of the same type, yielding homogeneous ensembles (e.g., RF), or of different types, yielding heterogeneous ensembles, for example, stacking (Khairalla et al., 2018) and meta‐learning with heterogeneous models (Ma & Fildes, 2021; Yu & Hua, 2022). Furthermore, we may generate a large number of weak learners (e.g., boosting, bagging, and RF) or combine a few competitive base learners (e.g., stacking, BMA, and meta‐learning; Leamer & Leamer, 1978).…”
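To make the parallel, homogeneous case concrete, the following is a minimal sketch of bagging with a toy base learner. All names here (`bootstrap_sample`, `train_stump`, `bagging_predict`) are illustrative, not from any cited work: each base learner is fit independently on its own bootstrap replicate of the data, and predictions are averaged.

```python
import random
from statistics import mean

def bootstrap_sample(data, rng):
    # Draw len(data) points with replacement: one bootstrap replicate.
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    # Toy base learner: a one-split regression stump that thresholds x
    # at the sample mean and predicts the mean y on each side.
    xs = [x for x, _ in sample]
    t = mean(xs)
    left = [y for x, y in sample if x <= t] or [0.0]
    right = [y for x, y in sample if x > t] or [0.0]
    l, r = mean(left), mean(right)
    return lambda x, t=t, l=l, r=r: l if x <= t else r

def bagging_predict(models, x):
    # Parallel ensemble: every model was built independently of the
    # others, so the only interaction is this final averaging step.
    return mean(m(x) for m in models)

rng = random.Random(0)
# Synthetic data: y is roughly 2x plus Gaussian noise.
data = [(x, 2.0 * x + rng.gauss(0, 0.5)) for x in range(20)]
models = [train_stump(bootstrap_sample(data, rng)) for _ in range(25)]
pred = bagging_predict(models, 10.0)
```

Because the base learners share no state, the list comprehension that fits `models` could be distributed across workers with no change in results, which is the practical sense in which bagging and RF are "parallel" ensembles. A heterogeneous ensemble would instead mix different `train_*` functions and combine them with learned weights (stacking) rather than a plain average.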