Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing 2020
DOI: 10.1145/3357713.3384329

Algorithms for heavy-tailed statistics: regression, covariance estimation, and beyond

Cited by 23 publications (17 citation statements); references 8 publications.
“…Lai, Rao, and Vempala similarly use spectral methods for agnostic mean estimation [12]. Hopkins and others achieve similar robustness results for both Gaussian and heavy-tailed distributions, relying on Sum-of-Squares proofs and semidefinite programming instead of filtering methods [8,2,9]. This field continues to evolve, with recent works applying gradient estimation and descent to the Sum-of-Squares hierarchy to yield similar error guarantees with faster runtime [1].…”
Section: Related and Prior Work (mentioning)
confidence: 99%
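The filtering methods contrasted with Sum-of-Squares above share a common primitive: project the data onto the top eigenvector of the empirical covariance and discard points with outlying projections. The sketch below shows one illustrative round of such spectral filtering; the function name and threshold rule are assumptions for exposition, not taken from any of the cited papers.

```python
# One round of spectral filtering for robust mean estimation (the
# "filtering methods" contrasted with Sum-of-Squares above). The
# threshold rule below is an illustrative assumption, not from [8,2,9].
import numpy as np

def filter_round(X, c=10.0):
    mu = X.mean(axis=0)
    centered = X - mu
    cov = centered.T @ centered / len(X)      # empirical covariance
    _, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]                        # top eigenvector: the direction
    scores = (centered @ v) ** 2              # that corruption inflates most
    keep = scores < c * np.median(scores)     # drop outlying projections
    return X[keep]

# Iterate until the top eigenvalue of the filtered sample is small,
# then report the filtered sample mean.
```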
“…The specific adversarial framework they use, known as γ-corruption, is powerful and can easily be generalized to other models. Definition 2.3 (γ-corruption [4]). Given γ > 0 and a set of samples of size m, the samples are γ-corrupted if an adversary is allowed to inspect the samples, remove m′ ∼ Bin(γ, m) of them, and replace them with arbitrary points.…”
Section: Robust High-dimensional Mean Estimation by Filtering (mentioning)
confidence: 99%
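The γ-corruption model of Definition 2.3 is straightforward to simulate. The snippet below is a hedged illustration: the "adversary" simply replaces a Binomial(γ, m) number of points with one fixed outlier, whereas the definition allows arbitrary, sample-inspecting replacements.

```python
# Toy simulation of gamma-corruption per Definition 2.3: replace
# m' ~ Bin(gamma, m) of the m samples. A real adversary may inspect the
# data and choose replacements arbitrarily; here it plants one fixed point.
import numpy as np

rng = np.random.default_rng(0)

def gamma_corrupt(samples, gamma, planted_point):
    m = len(samples)
    m_prime = rng.binomial(m, gamma)          # how many points to replace
    idx = rng.choice(m, size=m_prime, replace=False)
    corrupted = samples.copy()
    corrupted[idx] = planted_point            # "arbitrary" replacement
    return corrupted

clean = rng.standard_normal((1000, 5))        # m = 1000 samples in R^5
bad = gamma_corrupt(clean, gamma=0.05, planted_point=50.0)
```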
“…As sketched in [43, Appendix A], this class contains many popular and powerful frameworks, including local algorithms on graphs, power iteration, and approximate message passing [35,11,56,64,36]. In addition, a recent flurry of work has shown that for many average-case problems in high-dimensional statistics, including planted clique, sparse PCA, community detection, and tensor PCA, low-degree polynomials are as powerful as the best known polynomial-time algorithms [55,54,53,7,61,34,22,15,63,70,8,16]. Thus, showing that low-degree polynomial algorithms fail at some threshold provides evidence that all polynomial-time algorithms fail at that threshold.…”
Section: Introduction (mentioning)
confidence: 99%
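Power iteration, one of the frameworks the quote places inside the low-degree polynomial class, makes the degree bookkeeping concrete: ignoring normalization, every coordinate of the k-step iterate M^k v is a polynomial of degree k in the entries of M. A minimal sketch, with illustrative parameters:

```python
# Minimal power iteration: a representative of the low-degree polynomial
# class discussed above. Ignoring normalization, the entries of the
# k-step iterate M^k v are degree-k polynomials in the entries of M.
import numpy as np

def power_iteration(M, k=20):
    n = M.shape[0]
    v = np.ones(n) / np.sqrt(n)   # deterministic starting vector
    for _ in range(k):
        v = M @ v                 # raises the polynomial degree by one
        v /= np.linalg.norm(v)    # normalization (not itself polynomial)
    return v                      # approximates the top eigenvector of M
```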
“…) with probability 1 − δ, and they need a number of samples N of order d log(1/δ). The article [11] was the first to construct a polynomial-time method achieving the rate of the OLS in the Gaussian setting, ℓ(f̂) − ℓ(f*) ≤ O((log(1/δ) ∨ d)/N). To date, it is the only polynomial-time procedure achieving the optimal subgaussian rate.…”
Section: Introduction (mentioning)
confidence: 99%
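The quoted excess-risk rate (log(1/δ) ∨ d)/N can be tabulated directly; the short script below (with d, δ, and sample sizes chosen arbitrarily, since the O(·) hides absolute constants) shows the linear decay in the sample size N:

```python
# Tabulating the quoted excess-risk rate O((log(1/delta) v d)/N).
# d, delta, and the sample sizes are arbitrary illustrative choices;
# the O(.) in the bound hides absolute constants.
import numpy as np

d, delta = 50, 0.01
for N in (1_000, 10_000, 100_000):
    rate = max(np.log(1 / delta), d) / N
    print(f"N = {N:>7}: (log(1/delta) v d)/N = {rate:.2e}")
```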
“…The goal is to design a procedure that comes with actual working code and attains the optimal sub-gaussian error bound even though the data have only finite moments (up to L4) and in the presence of possibly adversarial outliers. A polynomial-time solution to this problem was recently discovered [11], but it has a high runtime due to its use of the Sum-of-Squares hierarchy. At the core of our algorithm is an adaptation of the spectral method introduced in [35] for the mean estimation problem to the linear regression problem.…”
(mentioning)
confidence: 99%
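The spectral mean estimator of [35] is not reproduced here; as a hedged stand-in for the heavy-tailed setting it addresses, the classical scalar median-of-means estimator below already achieves subgaussian deviations assuming only finite variance. All names and parameters are illustrative.

```python
# Classical scalar median-of-means: split the sample into blocks, average
# each block, take the median of the block means. A stand-in for the
# heavy-tailed setting, not the spectral method of [35] itself.
import numpy as np

def median_of_means(x, num_blocks=20):
    blocks = np.array_split(x, num_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(1)
heavy = rng.standard_t(df=3, size=10_000)   # heavy tails, finite variance
print(median_of_means(heavy))               # robust estimate of the mean (~0)
```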