2022
DOI: 10.1002/sta4.421
Parsimonious mixture‐of‐experts based on mean mixture of multivariate normal distributions

Abstract: The mixture‐of‐experts (MoE) paradigm attempts to learn complex models by combining several “experts” via probabilistic mixture models. Each expert in the MoE model handles a small area of the data space in which a gating function controls the data‐to‐expert assignment. The MoE framework has been used extensively in designing non‐linear models in machine learning and statistics to model the heterogeneity in data for the purpose of regression, classification and clustering. The existing MoE of multi‐target regr…
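For context, the mixture construction the abstract describes can be written in the standard MoE form below; the notation is generic (not taken from the paper itself), with a softmax gating network commonly used for the data‐to‐expert assignment:

p(y \mid x) = \sum_{k=1}^{K} g_k(x; \alpha)\, f_k(y \mid x; \theta_k),
\qquad
g_k(x; \alpha) = \frac{\exp(\alpha_k^{\top} x)}{\sum_{j=1}^{K} \exp(\alpha_j^{\top} x)},

where f_k is the density of the k‐th expert (here, based on a mean mixture of multivariate normal distributions, per the title), g_k is the gating function that softly partitions the input space among the experts, and \alpha and \theta_k collect the gating and expert parameters, respectively.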

Cited by 0 publications (no citations to date).
References 64 publications (95 reference statements).