2006
DOI: 10.1198/106186006x113629

A Fast Algorithm for S-Regression Estimates

Abstract: Equivariant high-breakdown point regression estimates are computationally expensive, and the corresponding algorithms become unfeasible for a moderately large number of regressors. One important advance to improve the computational speed of one such estimator is the fast-LTS algorithm. This article proposes an analogous algorithm for computing S-estimates. The new algorithm, which we call "fast-S", is also based on a "local improvement" step of the resampling initial candidates. This allows for a substantial redu…
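To make the abstract's idea concrete, the sketch below illustrates the general resampling-plus-local-improvement scheme: fit candidate coefficients on random p-point subsets, apply a few iteratively reweighted least-squares ("local improvement") steps with bisquare weights, and keep the candidate with the smallest M-scale of residuals. All function names, tuning constants, and iteration counts here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

C = 1.547   # bisquare tuning constant giving 50% breakdown point (assumed)
B = 0.5     # right-hand side of the M-scale equation for 50% breakdown

def rho_bisquare(u):
    """Bounded bisquare rho function, scaled so that max(rho) = 1."""
    v = np.clip(u / C, -1.0, 1.0)
    return 1.0 - (1.0 - v ** 2) ** 3

def m_scale(r, tol=1e-8, max_iter=100):
    """Solve (1/n) * sum(rho(r_i / s)) = B for s by fixed-point iteration."""
    s = np.median(np.abs(r)) / 0.6745 + 1e-12   # MAD as a starting value
    for _ in range(max_iter):
        s_new = s * np.sqrt(np.mean(rho_bisquare(r / s)) / B)
        if abs(s_new - s) < tol * s:
            break
        s = s_new
    return s

def irwls_step(X, y, beta, s):
    """One local-improvement step: weighted LS with bisquare weights."""
    u = (y - X @ beta) / s
    w = np.where(np.abs(u) < C, (1.0 - (u / C) ** 2) ** 2, 0.0)
    Xw = X * w[:, None]   # X^T W X beta = X^T W y
    p = X.shape[1]
    return np.linalg.solve(Xw.T @ X + 1e-10 * np.eye(p), Xw.T @ y)

def fast_s(X, y, n_resamples=50, k_steps=2, n_best=5, rng=None):
    """Resampling candidates + partial IRWLS refinement, then full
    iteration of the few most promising candidates."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    candidates = []
    for _ in range(n_resamples):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])  # exact fit to a p-subset
        except np.linalg.LinAlgError:
            continue
        for _ in range(k_steps):                    # only a few cheap steps
            s = m_scale(y - X @ beta)
            beta = irwls_step(X, y, beta, s)
        candidates.append((m_scale(y - X @ beta), beta))
    candidates.sort(key=lambda t: t[0])
    best_s, best_beta = np.inf, None
    for _, beta in candidates[:n_best]:             # fully iterate the best few
        for _ in range(50):
            s = m_scale(y - X @ beta)
            beta = irwls_step(X, y, beta, s)
        s = m_scale(y - X @ beta)
        if s < best_s:
            best_s, best_beta = s, beta
    return best_beta, best_s
```

The key economy is visible in `fast_s`: the expensive full IRWLS iteration is run only on the handful of candidates that already look best after two cheap improvement steps, rather than on every resampled subset.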

Cited by 186 publications (96 citation statements) | References 28 publications
“…Robust also provides a default initial estimate with high breakdown point (not necessarily efficient), which is used in the first three algorithms in our experiments: when p = 15, it is the S-estimate computed via random resampling; when p = 50, it is the estimate from the fast PY procedure. It is well known that the initial estimate is outperformed by MM in terms of estimation efficiency and robustness, both theoretically and empirically (Yohai 1987; Salibian-Barrera & Yohai 2006; Peña & Yohai 1999). All methods apart from Θ-IPOD require a cutoff value to identify which residuals are outliers. We applied the fully efficient procedure, which performs at least as well as the fixed choice of η = 2.5 in various situations (Gervini & Yohai 2002).…”
Section: Parameter Tuning In Outlier Detection
confidence: 99%
“…The first such estimator was proposed by Stahel (1981a,b) and Donoho (1982) and it is recommended for small data sets, but the most widely used high breakdown estimator is the minimum covariance determinant estimate (Rousseeuw 1985). Several algorithms for computing the S estimators (Davies 1987) are provided (Ruppert 1992; Woodruff and Rocke 1994; Rocke 1996; Salibian-Barrera and Yohai 2006). The minimum volume ellipsoid (MVE) estimator (Rousseeuw 1985) is also included since it has some desirable properties when used as an initial estimator for computing the S estimates (see Maronna et al 2006, p. 198).…”
Section: R> library("rrcov")
confidence: 99%
“…It uses an S-estimator (Rousseeuw and Yohai, 1984) for the errors, which is also computed with a bisquare score function. The S-estimator is computed using the Fast-S algorithm of Salibian-Barrera and Yohai (2006). Standard errors are computed using the formulas of Croux, Dhaene and Hoorelbeke (2003).…”
Section: Country
confidence: 99%
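For reference, the S-estimator with bisquare score function that the citing papers above compute via Fast-S can be written in its standard form (this is the textbook formulation, with c ≈ 1.547 for a 50% breakdown point, not text taken from the paper itself):

```latex
% S-estimator: minimize the M-scale of the residuals over beta
\hat{\beta}_S = \arg\min_{\beta}\; s\bigl(r_1(\beta),\dots,r_n(\beta)\bigr),
\qquad r_i(\beta) = y_i - \mathbf{x}_i^{\top}\beta,
% where the M-scale s is defined implicitly by
\frac{1}{n}\sum_{i=1}^{n} \rho\!\left(\frac{r_i(\beta)}{s}\right) = b,
\qquad
\rho(u) = \min\Bigl\{\,1,\; 1 - \bigl(1 - (u/c)^2\bigr)^{3}\Bigr\}.
% Bounded bisquare rho; b = 1/2 and c \approx 1.547 give 50% breakdown
% with consistency of s at the normal model.
```

Because ρ is bounded, large residuals contribute at most 1 to the sum, which is what gives the estimator its high breakdown point; the price is the non-convex objective that makes the resampling-based Fast-S algorithm necessary.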