2021
DOI: 10.5705/ss.202019.0045
A New Principle for Tuning-Free Huber Regression

Abstract: The robustification parameter, which balances bias and robustness, has played a critical role in the construction of sub-Gaussian estimators for heavy-tailed and/or skewed data. Although it can be tuned by cross-validation in traditional practice, in large-scale statistical problems such as high-dimensional covariance matrix estimation and large-scale multiple testing, the number of robustification parameters scales with the dimensionality, so that cross-validation can be computationally prohibitive. In this pa…
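For readers unfamiliar with the robustification parameter, a minimal sketch of plain Huber M-estimation of a location parameter may help. This is not the tuning-free procedure proposed in the paper; the function names and the fixed choices of τ below are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def huber_loss(r, tau):
    """Huber loss: quadratic on |r| <= tau, linear beyond; tau is the robustification parameter."""
    a = np.abs(r)
    return np.where(a <= tau, 0.5 * a ** 2, tau * a - 0.5 * tau ** 2)

def huber_mean(x, tau):
    """Huber M-estimator of location: minimize the summed Huber loss over theta."""
    return minimize_scalar(lambda theta: huber_loss(x - theta, tau).sum()).x

rng = np.random.default_rng(0)
x = 1.0 + rng.standard_t(df=3, size=500)             # heavy-tailed sample, true location 1
print(huber_mean(x, tau=1.345 * x.std()))            # classical fixed-constant choice
print(huber_mean(x, tau=x.std() * np.sqrt(len(x))))  # very large tau: close to the sample mean
```

Small τ pushes the estimate toward the median (robust but biased for skewed errors); large τ recovers the sample mean (unbiased but sensitive to heavy tails). The paper's contribution concerns choosing τ from the data without cross-validation.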

Cited by 28 publications (40 citation statements) · References 37 publications
“…, p with high probability even when p grows exponentially fast with n. Here, the divergence of $\tau_j$ guarantees $\hat{\theta}_j$ to be sub-Gaussian even when the error only admits a finite $(2+\delta)$-th moment, and more importantly, the order of $\tau_j$ yields the desired approximation error in the Bahadur representation of $\hat{\theta}_j$ (Proposition 1) as well as the uniform non-asymptotic bounds on the estimated covariance of $\hat{\theta}_j$ (Theorem 2). As noted in the literature (Catoni, 2012; Fan et al., 2019; Sun et al., 2020; Wang et al., 2021), a divergent $\tau_j$ is necessary to balance bias and robustness in the presence of heavy-tailed and/or skewed errors. On the other hand, the order of $\tau_j$ in our setting differs from that in earlier studies of adaptive Huber regression.…”
Section: Model and Methodology
confidence: 99%
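The role of a divergent $\tau_j$ described in this excerpt can be illustrated with a simple simulation. The Lomax noise, the rate $\tau_n = (n/\log n)^{1/2}$, and the helper `huber_mean` below are my own illustrative choices, not the cited papers' constructions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def huber_mean(x, tau):
    """Huber M-estimator of location with robustification parameter tau."""
    loss = lambda t: np.sum(np.where(np.abs(x - t) <= tau,
                                     0.5 * (x - t) ** 2,
                                     tau * np.abs(x - t) - 0.5 * tau ** 2))
    return minimize_scalar(loss).x

rng = np.random.default_rng(1)
delta = 0.5
for n in (200, 2000):
    tau_n = np.sqrt(n / np.log(n))   # divergent robustification parameter (illustrative rate)
    dev_mean, dev_huber = [], []
    for _ in range(300):
        # centered Lomax noise with tail index 2 + delta: skewed, heavy right tail
        e = rng.pareto(2 + delta, size=n) - 1.0 / (1.0 + delta)
        dev_mean.append(abs(e.mean()))
        dev_huber.append(abs(huber_mean(e, tau_n)))
    print(n, np.quantile(dev_mean, 0.99), np.quantile(dev_huber, 0.99))
```

The upper deviation quantiles of the truncated estimator are typically tighter than those of the sample mean, while letting $\tau_n$ grow with n keeps the truncation bias from the skewed tail asymptotically negligible.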
“…For example, with a finite $(1+\delta)$-th moment of the error, Sun et al. (2020) focused on estimating the adaptive Huber regression, which corresponds to $p = 1$ in our setting, and considered $\tau_j = O\big(n^{\max\{1/(1+\delta),\,1/2\}} (d \log n)^{-\max\{1/(1+\delta),\,1/2\}}\big)$, while Fan et al. (2019) used $\tau_j = O\big(n^{1/2}\{\log(np)\}^{-1/2}\big)$ for testing p-dimensional mean vectors under the assumption of a finite fourth moment of the errors, which corresponds to $d = 1$ in our setting. In practice, $\tau_j$ can be chosen either by cross-validation or by the recent data-driven method of Wang et al. (2021).…”
Section: Model and Methodology
confidence: 99%
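The two orders of $\tau_j$ quoted in this excerpt can be written out directly. A small sketch, with constants omitted; the example values of n, d, p, and δ are mine, purely for comparison.

```python
import math

def tau_sun(n, d, delta):
    """Order (n / (d log n))^{max(1/(1+delta), 1/2)}, as attributed to Sun et al. (2020)."""
    power = max(1.0 / (1.0 + delta), 0.5)
    return (n / (d * math.log(n))) ** power

def tau_fan(n, p):
    """Order (n / log(n p))^{1/2}, as attributed to Fan et al. (2019)."""
    return math.sqrt(n / math.log(n * p))

n, d, p = 2000, 10, 500
for delta in (0.2, 1.0):
    print(delta, tau_sun(n, d, delta), tau_fan(n, p))
```

With δ < 1 the exponent 1/(1+δ) exceeds 1/2, so the first rate grows faster in n; with δ ≥ 1 both rates are of order (n / log-factor)^{1/2} and differ only in whether d or p enters the logarithmic factor.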
“…Before proceeding, it is worth mentioning that although all tuning constants were set using the standard normal distribution as a benchmark, the location-scale regression model in () only requires the error term to have zero mean and unit variance. However, if the error distribution is asymmetric, the choice of $b_D = 1.345$ induces bias in the intercept estimate and, consequently, the corresponding prediction of the conditional mean is also biased [16,17]. Increasing the tuning constant $b_D$ reduces this bias, but it cannot be increased too much if robustness is to be maintained.…”
Section: Robust and Flexible Inference for the Covariate-Specific ROC Curve
confidence: 99%
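The intercept bias described in this excerpt is straightforward to reproduce. A hedged sketch: the standardized chi-square error and the helper name `huber_location` are illustrative, not taken from the cited work.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def huber_location(x, c):
    """Minimize the Huber loss with tuning constant c over a single location parameter."""
    loss = lambda t: np.sum(np.where(np.abs(x - t) <= c,
                                     0.5 * (x - t) ** 2,
                                     c * np.abs(x - t) - 0.5 * c ** 2))
    return minimize_scalar(loss).x

rng = np.random.default_rng(2)
# skewed error with zero mean and unit variance: standardized chi-square with 3 degrees of freedom
e = (rng.chisquare(3, size=100000) - 3.0) / np.sqrt(6.0)
for c in (1.345, 3.0, 10.0):
    print(c, huber_location(e, c))   # deviation from 0 is the bias in the intercept
```

With c = 1.345 the estimate sits visibly below zero because the long right tail is down-weighted; raising c shrinks the bias but also weakens the protection against outliers, which is the trade-off the excerpt describes.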