2020 · DOI: 10.3150/19-bej1149
Robust modifications of U-statistics and applications to covariance estimation problems

Abstract: Let Y be a d-dimensional random vector with unknown mean µ and covariance matrix Σ. This paper is motivated by the problem of designing an estimator of Σ that admits tight deviation bounds in the operator norm under minimal assumptions on the underlying distribution, such as the existence of only fourth moments of the coordinates of Y. To address this problem, we propose robust modifications of the operator-valued U-statistics, obtain non-asymptotic guarantees for their performance, and demonstrate the implications …
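To make the robustification theme concrete, here is a minimal sketch of a Catoni-style spectral truncation in the spirit of Minsker (2018), applied to rank-one terms. This is an illustration only, not the paper's U-statistic construction; the tuning parameter `theta` and the function names are hypothetical choices for this sketch.

```python
import numpy as np

def psi(x):
    # Catoni-type influence function: psi(x) ~ x near 0, but grows only logarithmically.
    return np.where(x >= 0, np.log1p(x + 0.5 * x**2), -np.log1p(-x + 0.5 * x**2))

def spectral(f, A):
    # Apply a scalar function to a symmetric matrix through its eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

def robust_cov(X, theta):
    # Truncated average of the rank-one terms: (1/(n*theta)) * sum_i psi(theta * x_i x_i^T).
    n, d = X.shape
    acc = np.zeros((d, d))
    for x in X:
        acc += spectral(psi, theta * np.outer(x, x))
    return acc / (n * theta)

rng = np.random.default_rng(0)
X = rng.standard_normal((3000, 2))  # true covariance is the identity
Sigma_r = robust_cov(X, theta=0.05)
```

For light-tailed data the truncation is nearly inactive and `Sigma_r` is close to the ordinary sample covariance; its purpose is to keep rare, large outer products from dominating the average when only low-order moments exist.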

Cited by 23 publications (15 citation statements) · References 42 publications
“…And that is also the case for covariance estimation. The current state-of-the-art for covariance estimation in the heavy-tailed situation is [13] (see Corollary 4.1 there and similar results in [11,12]), in which X is assumed to satisfy an L4−L2 norm equivalence. Definition 1.5.…”
Section: Introduction
confidence: 95%
“…The question of estimating the covariance of a random vector has been studied extensively in recent years (see, e.g., [2,5,11,12,13] and references therein). To formulate the problem, let X be a zero-mean random vector taking its values in R^d and denote the covariance matrix by Σ = E(X ⊗ X).…”
Section: Introduction
confidence: 99%
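For a zero-mean X, the plug-in counterpart of Σ = E(X ⊗ X) is the average of outer products over the sample, and the deviation bounds discussed above are stated in the operator (spectral) norm. A minimal sketch, with illustrative variable names:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 4
X = rng.standard_normal((n, d))  # zero-mean samples; true Sigma is the identity

# Average of outer products: (1/n) * sum_i x_i x_i^T, written as one matrix product.
Sigma_hat = X.T @ X / n

# Deviation from the truth, measured in the operator (spectral) norm.
err = np.linalg.norm(Sigma_hat - np.eye(d), ord=2)
```

For Gaussian data this error is of order sqrt(d/n); the heavy-tailed results cited above aim to recover comparable rates under only fourth-moment-type assumptions, where this plain average can fail.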
“…Minsker (2018) [23] generalized Catoni's idea to the multivariate self-adjoint random matrix case. Furthermore, Minsker and Wei (2017) [24], Ke et al. (2019) [15], Minsker and Wei (2020) [25], and Fan et al. (2021) [10] constructed different robust covariance estimators when the samples have a bounded fourth moment. Avella-Medina et al. (2018) [1] applied the loss of Huber (1964) [14] to construct robust covariance and precision matrix estimators without requiring finite kurtosis of the samples.…”
Section: Introduction
confidence: 99%
“…We note that U-statistics with values in the set of self-adjoint matrices have been considered in [6]; however, most results in that work deal with the element-wise sup-norm, while we are primarily interested in results about the moments and tail behavior of the spectral norm of U-statistics. Another recent work [26] investigates robust estimators of covariance matrices based on U-statistics, but deals only with the case of non-degenerate U-statistics that can be reduced to the study of independent sums.…”
Section: Introduction
confidence: 99%
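A classical concrete instance of a non-degenerate, matrix-valued U-statistic of order 2 is the unbiased sample covariance, generated by the kernel h(x, y) = (x − y)(x − y)ᵀ / 2. A quick numerical check of this identity (illustrative code, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 3
X = rng.standard_normal((n, d))

# Order-2 U-statistic with matrix-valued kernel h(x, y) = (x - y)(x - y)^T / 2.
U = np.zeros((d, d))
for i in range(n):
    for j in range(i + 1, n):
        diff = X[i] - X[j]
        U += 0.5 * np.outer(diff, diff)
U /= n * (n - 1) / 2  # average over all unordered pairs

# This U-statistic equals the unbiased sample covariance exactly.
assert np.allclose(U, np.cov(X, rowvar=False))
```

Since E[h(X, Y)] = Σ for independent copies X and Y, the kernel makes this U-statistic unbiased for the covariance; the robust modifications studied in the paper replace such plain averages of kernel evaluations with truncated versions.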
“…As a corollary of our bounds, we deduce a variant of the Matrix Bernstein inequality for U-statistics of order 2. We also discuss connections of our bounds with general moment inequalities for Banach space-valued U-statistics due to R. Adamczak [1], and leverage Adamczak's inequalities to obtain additional refinements and improvements of the results. The key technical tool used in our arguments is the extension of the non-commutative Khintchine inequality (Lemma 3.3), which could be of independent interest.…”
confidence: 99%