2017 · DOI: 10.1016/j.csda.2017.01.004

The robust EM-type algorithms for log-concave mixtures of regression models

Abstract: Finite mixture of regression (FMR) models can be reformulated as incomplete-data problems and estimated via the expectation-maximization (EM) algorithm. Their main drawback is the strong parametric assumption, such as normally distributed component residuals; the estimates might be biased if the model is misspecified. To relax the parametric assumption about the component error densities, a new method is proposed to estimate the mixture regression parameters by only assuming that the components h…

Cited by 17 publications (14 citation statements) · References 44 publications
“…By extending the idea of Hu et al (), Hu et al (2017) proposed a robust EM-type algorithm for mixture regression models by assuming the component error densities are log-concave. A density $g(x)$ is log-concave if its log-density, $\phi(x) = \log g(x)$, is concave.…”
Section: Robust Mixture Regression Methods (mentioning)
confidence: 99%
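As a quick numerical illustration of this definition (not drawn from the cited papers), the sketch below checks log-concavity on a grid: a density is log-concave when the second differences of its log-density are non-positive. The standard normal density is used as the example.

```python
import numpy as np
from scipy.stats import norm

# A density g is log-concave iff phi(x) = log g(x) is concave,
# i.e. its (discrete) second differences are non-positive on a grid.
x = np.linspace(-4.0, 4.0, 401)
phi = norm.logpdf(x)             # phi(x) = -x^2/2 - log(sqrt(2*pi))
second_diff = np.diff(phi, n=2)  # discrete analogue of phi''(x)

print(np.all(second_diff <= 1e-10))  # True: the normal density is log-concave
```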
“…The likelihood function for the log-concave mixture of regression model can be presented as
$$\ell(\theta, \mathbf{g} \mid \mathbf{X}, \mathbf{y}) = \sum_{i=1}^{n} \log \sum_{j=1}^{m} \pi_j \, g_j\bigl(y_i - \mathbf{x}_i^T \beta_j\bigr),$$
where $\theta = (\pi_1, \beta_1, \ldots, \pi_m, \beta_m)^T$ and $g_j(x) = \exp\{\phi_j(x)\}$ for some unknown concave function $\phi_j(x)$. Hu et al (2017) proposed an EM algorithm to maximise this objective function. Specifically, in the $(k+1)$-th step, the error density $g_j$ in the M-step can be updated by
$$g_j^{(k+1)} \leftarrow \arg\max_{g_j \in \mathbb{G}} \sum_{i=1}^{n} p_{ij}^{(k+1)} \log g_j\bigl(y_i - \mathbf{x}_i^T \hat{\beta}_j^{(k+1)}\bigr), \qquad j = 1, \ldots, m,$$
where $\mathbb{G}$ is the family of all log-concave densities.…”
Section: Robust Mixture Regression Methods (mentioning)
confidence: 99%
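For readers who want the mechanics, here is a minimal self-contained sketch of one EM iteration for this model. It is not the cited authors' implementation: the nonparametric M-step over $\mathbb{G}$ (a weighted log-concave maximum-likelihood fit, which needs a specialized solver such as an active-set algorithm) is replaced here by a Gaussian fit, which is itself log-concave; all function and variable names are illustrative.

```python
import numpy as np

def em_iteration(X, y, pi, beta, sigma):
    """One EM iteration for an m-component mixture of linear regressions.

    Gaussian component densities serve as a log-concave stand-in; the cited
    paper instead updates each error density by a weighted log-concave
    maximum-likelihood fit over the family of all log-concave densities.
    """
    resid = y[:, None] - X @ beta.T            # y_i - x_i^T beta_j, shape (n, m)

    # E-step: posterior probabilities p_ij that observation i came from component j.
    log_w = np.log(pi) - 0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    log_w -= log_w.max(axis=1, keepdims=True)  # stabilize before exponentiating
    p = np.exp(log_w)
    p /= p.sum(axis=1, keepdims=True)

    # M-step: mixing proportions, then a weighted least-squares update of each beta_j.
    pi_new = p.mean(axis=0)
    beta_new, sigma_new = np.empty_like(beta), np.empty_like(sigma)
    for j in range(len(pi)):
        w = p[:, j]
        XtW = X.T * w
        beta_new[j] = np.linalg.solve(XtW @ X, XtW @ y)
        r = y - X @ beta_new[j]
        # Density update (stand-in): a Gaussian fit to the weighted residuals.
        sigma_new[j] = np.sqrt((w * r ** 2).sum() / w.sum())
    return pi_new, beta_new, sigma_new
```

Iterating this map from a reasonable starting point gives the usual EM behaviour; replacing the Gaussian stand-in with a true weighted log-concave MLE of the residuals recovers the structure of the update displayed above.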
“…Ma et al (2018) extended the identifiability result for the model (3.7) by allowing different component error densities, and further established the consistency and asymptotic normality of their proposed estimators as well as of the estimator proposed by Hunter and Young (2012). Hu et al (2017) assumed the error densities to be log-concave. That is, the model has the same form as (3.7), whereas $g_c(x) = \exp\{\varphi_c(x)\}$ for some unknown concave function $\varphi_c(x)$.…”
Section: Nonparametric Errors (mentioning)
confidence: 99%