2021
DOI: 10.1214/20-AOS1980

A shrinkage principle for heavy-tailed data: High-dimensional robust low-rank matrix recovery

Abstract: This paper introduces a simple principle for robust statistical inference via appropriate shrinkage on the data. This widens the scope of high-dimensional techniques, relaxing the distributional conditions from sub-exponential or sub-Gaussian to bounded second or fourth moments. As an illustration of this principle, we focus on robust estimation of the low-rank matrix Θ* from the trace regression model Y = Tr(Θ*⊤X) + ϵ. It encompasses four popular problems: sparse linear model, compressed sensing…
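The shrinkage principle itself is easy to sketch in code. Below is a minimal illustration with synthetic data and an ad-hoc truncation level tau (both assumptions of this sketch, not the paper's tuning): responses from a trace regression model with heavy-tailed noise are winsorized before being passed to any downstream low-rank estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trace-regression data Y_i = Tr(Theta*^T X_i) + eps_i with
# heavy-tailed Student-t noise: only low-order moments are bounded.
d, n, r = 10, 500, 2
U = rng.normal(size=(d, r))
Theta_star = U @ U.T / d                  # rank-r target matrix
X = rng.normal(size=(n, d, d))            # design matrices X_1, ..., X_n
eps = rng.standard_t(df=4.5, size=n)      # heavy-tailed noise
Y = np.einsum('kij,ij->k', X, Theta_star) + eps

# Shrinkage principle: truncate (winsorize) each response at level tau
# before running any standard low-rank recovery procedure on (X, Y).
# The sqrt(n)-type scaling is a heuristic, not the paper's exact tuning.
tau = np.sqrt(n) / 4
Y_shrunk = np.clip(Y, -tau, tau)

print("fraction of responses truncated:", np.mean(np.abs(Y) > tau))
```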

Cited by 60 publications (75 citation statements)
References 55 publications
“…This framework includes estimating θ = µ⊤Σ⁻¹µ using different inputs of estimators for µ and Σ, such as robustified estimators (Fan, Wang, Zhu, 2016; Fan, Wang, Zhong and Zhu, 2018; Ke, Minster, Ren, Sun, and Zhou, 2019). It also includes the two-sample problem, as will be elaborated below.…”
Section: A General Result for the De-biased Estimator (mentioning)
confidence: 99%
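The plug-in construction this quote refers to is straightforward; here is a hedged sketch, where plug_in_theta is a hypothetical helper and the ordinary sample mean and covariance stand in for the robustified inputs (truncated versions are sketched after the next quote).

```python
import numpy as np

def plug_in_theta(mu_hat, Sigma_hat):
    """Hypothetical helper: plug-in estimate of theta = mu^T Sigma^{-1} mu
    from any pair of (possibly robustified) mean/covariance estimators."""
    return float(mu_hat @ np.linalg.solve(Sigma_hat, mu_hat))

# Usage with the ordinary sample mean and covariance; under heavy tails
# the quote suggests swapping in robustified inputs instead.
rng = np.random.default_rng(1)
X = rng.standard_t(df=5, size=(200, 3)) + np.array([1.0, 0.5, -0.2])
print(plug_in_theta(X.mean(axis=0), np.cov(X, rowvar=False)))
```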
“…When the data possesses heavier tails, it might be necessary to substitute the sample mean and sample covariance matrix used in α̂ and θ̂ by some robustified versions, in order to achieve a better bias-variance tradeoff. Motivated by recent advances in nonasymptotic deviation analyses of tail-robust estimators for the mean vector and covariance matrix (see Fan, Wang, Zhu (2016), Ke, Minster, Ren, Sun, and Zhou (2019) and references therein), to estimate α and θ under heavier-tailed distributions, we consider the estimators α̂ and ϑ in (3.15) and (3.16), with element-wise truncated mean and covariance matrix estimators defined as follows:…”
Section: Suppose the Two Samples {X… (mentioning)
confidence: 99%
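The quoted definitions are cut off above; a generic element-wise truncation of this kind (with the threshold tau picked by a common rate heuristic, not the cited papers' exact constants or constructions) might look like the following sketch.

```python
import numpy as np

def truncated_mean(X, tau):
    """Element-wise truncated mean: clip each coordinate at +/- tau,
    then average over the sample."""
    return np.clip(X, -tau, tau).mean(axis=0)

def truncated_cov(X, tau):
    """Element-wise truncated second-moment matrix, recentred with the
    truncated mean; each product X_i X_j is clipped at +/- tau^2."""
    prods = np.einsum('ni,nj->nij', X, X)          # all pairwise products
    second = np.clip(prods, -tau**2, tau**2).mean(axis=0)
    mu = truncated_mean(X, tau)
    return second - np.outer(mu, mu)

# tau ~ (n / log d)^{1/4} is a common rate heuristic under bounded fourth
# moments; the cited papers' constants and exact forms differ.
rng = np.random.default_rng(2)
X = rng.standard_t(df=4.2, size=(500, 4))
tau = (X.shape[0] / np.log(X.shape[1])) ** 0.25
print(truncated_cov(X, tau))
```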
“…We note that the truncation detects the jumps and mitigates their impact on the estimator. Other truncation schemes can also achieve a similar goal (see Fan et al. (2021)). It will be shown that the proposed adaptive robust estimator T^α_{ij,θ} possesses sub-Weibull concentration bounds (see Theorem 1).…”
(mentioning)
confidence: 83%
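A toy version of the jump-truncation idea in this quote, with simulated diffusive increments, occasional jumps, and an ad-hoc threshold u_n (the adaptive estimator T^α_{ij,θ} in the quoted paper is considerably more refined):

```python
import numpy as np

rng = np.random.default_rng(3)

# Diffusive increments contaminated by occasional large jumps.
n, dt = 1000, 1.0 / 1000
increments = np.sqrt(dt) * rng.normal(size=n)
jumps = rng.binomial(1, 0.01, size=n) * rng.normal(scale=0.5, size=n)
increments += jumps

# Truncation: discard increments whose magnitude exceeds u_n ~ dt^0.49,
# so genuine jumps are detected and removed while, with high probability,
# purely diffusive moves survive.  The constant 4 is an ad-hoc choice.
u_n = 4 * dt ** 0.49
kept = np.where(np.abs(increments) <= u_n, increments, 0.0)
print("truncated realized variance:", np.sum(kept ** 2))
```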
“…Assuming the fourth moments exist and are bounded is significantly weaker than the sub-Gaussian assumption. Moreover, such an assumption is prevalent in the literature on robust statistics (Fan et al., 2018, 2019b, 2020b). Now we are ready to introduce the theoretical results for the setting with general design.…”
Section: General Design (mentioning)
confidence: 99%
“…To overcome these difficulties, in our algorithm, instead of estimating E[Y · S_{p0}(X)] by its empirical counterpart, we construct robust estimators via proper truncation techniques, which have been widely applied in high-dimensional M-estimation problems with heavy-tailed data (Fan et al., 2020b; Zhu, 2017; Wei and Minsker, 2017; Minsker, 2018; Fan et al., 2020a; Ke et al., 2019; Minsker and Wei, 2020). These robust estimators are then employed to compute the update directions of the gradient descent algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
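A hedged sketch of the strategy this quote describes: truncate each per-sample term of an empirical gradient before averaging, then use the robust average as the descent direction. The least-squares loss and all tuning choices below are illustrative assumptions, not the quoted paper's construction.

```python
import numpy as np

def truncated_average(Z, tau):
    """Robust surrogate for an empirical mean of heavy-tailed terms:
    clip each term element-wise at +/- tau, then average."""
    return np.clip(Z, -tau, tau).mean(axis=0)

def robust_gradient_descent(X, Y, steps=200, lr=0.1, tau=5.0):
    """Least-squares regression where each per-sample gradient is
    truncated before averaging; an assumed, simplified stand-in for
    the quoted construction."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        # per-sample gradient of 0.5 * (x_i^T beta - y_i)^2
        per_sample = (X @ beta - Y)[:, None] * X
        beta -= lr * truncated_average(per_sample, tau)
    return beta

rng = np.random.default_rng(4)
n, d = 500, 5
beta_star = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
X = rng.normal(size=(n, d))
Y = X @ beta_star + rng.standard_t(df=4.5, size=n)
print(robust_gradient_descent(X, Y))
```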