We note here that both GLoME and BLoME models have been thoroughly studied in the statistics and machine learning literatures under many different guises, including localized MoE [86,87,69,15], normalized Gaussian networks [90], MoE modeling of priors in Bayesian nonparametric regression [83,82], cluster-weighted modeling [47], deep mixtures of linear inverse regressions [55], the hierarchical Gaussian locally linear mapping structured mixture (HGLLiM) model [95], multiple-output Gaussian gated mixtures of linear experts [73], and approximate Bayesian computation with surrogate posteriors via GLLiM [39].