2010
DOI: 10.1016/j.csda.2010.04.017

An evolutionary algorithm for robust regression


Cited by 21 publications (9 citation statements)
References 18 publications
“…Robust estimators for linear regression models, including the repeated median (RM) (Siegel, 1982), least median of squares (LMS) and least trimmed squares (LTS) (Rousseeuw, 1984), the S-estimate (Rousseeuw and Yohai, 1984), Fast-LTS (Rousseeuw and Driessen, 2006), the efficient computation by Flores (2010), and the evolutionary algorithm proposed by Nunkesser and Morell (2010), are introduced. All of them have very low efficiency for a regression model under the normality assumption.…”
Section: τ-Estimate and Fast-τ-Estimate
confidence: 99%
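As a point of reference for the estimators listed in the quoted statement (this is not part of the quotation), the LMS and LTS criteria of Rousseeuw (1984) can be written as

\[
\hat{\beta}_{\mathrm{LMS}} = \arg\min_{\beta} \ \operatorname{med}_{i} \, r_i^2(\beta),
\qquad
\hat{\beta}_{\mathrm{LTS}} = \arg\min_{\beta} \ \sum_{i=1}^{h} r_{(i)}^2(\beta),
\]

where $r_i(\beta) = y_i - x_i^{\top}\beta$, $r_{(1)}^2(\beta) \le \dots \le r_{(n)}^2(\beta)$ are the ordered squared residuals, and $h \approx n/2$ (for example $h = \lfloor (n+p+1)/2 \rfloor$) controls the trimming.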
“…Since the median is not a smooth function of the squared residuals, gradient-based optimization techniques are not applicable. Therefore, several algorithms were developed for the LMS in Winker et al (2011), Nunkesser and Morell (2010) and Karr et al (1995), among others.…”
Section: Preliminaries
confidence: 99%
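Because the LMS objective is non-smooth and riddled with local minima, the papers quoted above resort to stochastic search. A minimal sketch of this idea, assuming NumPy, is an elemental-subset search that repeatedly fits exact solutions through p randomly chosen points and keeps the best candidate; it illustrates the optimization problem only and is not the evolutionary algorithm of Nunkesser and Morell (2010):

import numpy as np

def lms_objective(beta, X, y):
    """Median of squared residuals: the LMS criterion, non-smooth in beta."""
    r = y - X @ beta
    return np.median(r ** 2)

def lms_elemental_search(X, y, n_trials=2000, seed=0):
    """Illustrative elemental-subset search for least median of squares.

    Each trial solves the exact fit through p randomly chosen observations
    and evaluates the LMS criterion on the full data set; the candidate with
    the smallest median squared residual is kept.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_beta, best_obj = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])  # exact fit to the subset
        except np.linalg.LinAlgError:
            continue  # singular subset, skip it
        obj = lms_objective(beta, X, y)
        if obj < best_obj:
            best_beta, best_obj = beta, obj
    return best_beta, best_obj

Blind subset sampling of this kind is what evolutionary and other heuristic schemes improve upon, for example by recombining and mutating promising candidate subsets instead of drawing each one independently.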
“…It is the basis of fast heuristic algorithms in high-breakdown regression [50]. Several recent methods based on this formulation, including evolutionary and semidefinite-programming approaches, were presented in [3,4,9,36,37,53]. The four methods mentioned above achieve the highest attainable asymptotic breakdown point of 1/2, which means that up to half of the data can be contaminated without affecting the estimator.…”
Section: Optimization
confidence: 99%
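For context (not part of the quoted statement), the finite-sample breakdown point referred to here is usually defined as the smallest fraction of observations that, when replaced by arbitrary values, can carry the estimator beyond all bounds:

\[
\varepsilon_n^{*}(\hat{\beta}, Z) = \min_{m} \left\{ \frac{m}{n} : \sup_{Z'_m} \big\| \hat{\beta}(Z'_m) - \hat{\beta}(Z) \big\| = \infty \right\},
\]

where $Z'_m$ ranges over all data sets obtained from $Z$ by replacing $m$ of its $n$ observations; LMS, LTS, and S-estimators attain the maximal asymptotic value $\varepsilon^{*} = 1/2$.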