2020
DOI: 10.1051/ps/2020014
Inference robust to outliers with 1-norm penalization

Abstract: This paper considers the problem of inference in a linear regression model with outliers, where the number of outliers can grow with the sample size but their proportion goes to 0. We apply the square-root lasso estimator, penalizing the ℓ1-norm of a random vector which is non-zero for outliers. We derive rates of convergence and asymptotic normality. Our estimator has the same asymptotic variance as the OLS estimator in the standard linear model. This enables the construction of tests and confidence sets in the usual and si…
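The abstract describes augmenting the regression with one shift parameter per observation and penalizing the ℓ1-norm of that shift vector, so that non-zero entries flag outliers. A minimal numpy sketch of this idea, using the plain-lasso objective ||y − Xb − a||² / (2n) + λ||a||₁ for simplicity rather than the paper's square-root variant (function names and the tuning value are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(z, t):
    # Componentwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def robust_ols_l1(X, y, lam, n_iter=200):
    """Alternating minimization of ||y - X b - a||^2 / (2 n) + lam * ||a||_1.

    Each observation gets its own shift a_i; the l1 penalty keeps most a_i
    at zero, and non-zero entries of a flag outlying observations.
    (Illustrative sketch of the l1-outlier-shift idea, not the paper's
    square-root-lasso estimator.)
    """
    n, p = X.shape
    a = np.zeros(n)
    ls_solver = np.linalg.pinv(X)   # (X'X)^{-1} X' for full-column-rank X
    for _ in range(n_iter):
        b = ls_solver @ (y - a)                    # OLS on corrected responses
        a = soft_threshold(y - X @ b, n * lam)     # prox step for the shifts
    return b, a
```

With a threshold n·λ a few times the noise scale, clean observations get a_i = 0 and the b-step reduces to OLS on the uncontaminated part of the sample, which is consistent with the abstract's claim that the estimator matches the OLS asymptotic variance.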

Cited by 4 publications (8 citation statements); references 35 publications.
“…Censoring poses additional challenges because T_i is not observed for some i. It is well known that ℓ1-norm penalization allows one to build robust estimators ([21, 5]). In [24], Stute proposes a simple estimator for censored regression.…”
Section: Main Estimator
confidence: 99%
“…for any b ∈ R^p and, therefore, β̂ = β̂(α̂) when there is a unique solution to the minimization program (5). Hence, when (X^w)′X^w is positive definite, we have…”
Section: Main Estimator
confidence: 99%
“…[9] elaborates on how to choose λ_β^k according to (i). Lemma 2.3 and Corollary 2.4 in [11] provide guidance on how to pick λ_γ^k under this constraint. Condition (ii) states that the extended restricted eigenvalue is bounded from below with probability approaching 1.…”
Section: Rate of Convergence of the First Step Estimator
confidence: 99%
“…These works do not provide inference results. In this setup, [11] shows that a variant of the square-root lasso estimator is asymptotically normal and efficient. The present paper can be seen as an extension of this result in a high-dimensional context.…”
Section: Introduction
confidence: 99%