2016
DOI: 10.1016/j.neunet.2016.03.002

Robust mixture of experts modeling using the t distribution

Abstract: Mixture of Experts (MoE) is a popular framework for modeling heterogeneity in data for regression, classification, and clustering. For regression and cluster analyses of continuous data, MoE models usually use normal experts, that is, expert components following the Gaussian distribution. However, for data containing a group or groups of observations with heavy tails or atypical observations, the use of normal experts is unsuitable and can unduly affect the fit of the MoE model. We introduce a robust MoE modeling approach using the t distribution…
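As a rough sketch of the kind of model the abstract describes (the paper's exact parameterization and notation may differ), a mixture of experts with softmax gating and t-distributed expert components for a response y given covariates x can be written as

f(y \mid \boldsymbol{x}; \boldsymbol{\Psi}) = \sum_{k=1}^{K} \pi_k(\boldsymbol{x}; \boldsymbol{\alpha})\, t_{\nu_k}\!\left(y;\ \mu_k(\boldsymbol{x}; \boldsymbol{\beta}_k),\ \sigma_k^2\right),
\qquad
\pi_k(\boldsymbol{x}; \boldsymbol{\alpha}) = \frac{\exp\!\left(\boldsymbol{\alpha}_k^{\top}\boldsymbol{x}\right)}{\sum_{\ell=1}^{K} \exp\!\left(\boldsymbol{\alpha}_{\ell}^{\top}\boldsymbol{x}\right)},

where t_{\nu}(y;\ \mu,\ \sigma^2) denotes the Student t density with location \mu, scale \sigma^2 and \nu degrees of freedom. The heavier tails of the t experts (small \nu_k) are what make the fit less sensitive to atypical observations; the normal-expert MoE is recovered in the limit \nu_k \to \infty.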


Cited by 28 publications (3 citation statements)
References 42 publications (96 reference statements)
“…However, this criterion can also be employed for model selection. In [36], there is a short discussion on different modeling selections for MoE.…”
Section: Model Selection and Stop Condition
confidence: 99%
“…Since their beginning more than three decades ago, they have been the subject of numerous research works across various domains such as statistics (Van der Heijden et al, 1996; Wedel, 2002), market segmentation (Dillon & Kutnar, 1994; Gupta & Chintagunta, 1994; Kamakura et al, 1994; Wedel & Kamakura, 2012), computer vision (Theis & Bethge, 2015; X. Wang et al, 2020), economics (Mazza et al, 2017; Wang et al, 1998) and social sciences (Gormley & Murphy, 2010). There has been a growing interest recently in developing the MoE of univariate regression models based on non-normal distributions as efficient tools for handling non-linear regression problems and model-based clustering of skewed and heavy-tailed distributed regression data; see, for example, Nguyen and McLachlan (2016), Chamroukhi (2016), Chamroukhi (2017) and Mirfarah et al (2021). For the multivariate response, Dang and McNicholas (2015) have proposed MoE of classical MTR models (hereafter called MoE-MTR_N) by modelling the mixing proportions as a function of some covariates.…”
Section: Introduction
confidence: 99%