Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms 2019
DOI: 10.1137/1.9781611975482.170

Efficient Algorithms and Lower Bounds for Robust Linear Regression

Abstract: We study the prototypical problem of high-dimensional linear regression in a robust model where an ε-fraction of the samples can be adversarially corrupted. We focus on the fundamental setting where the covariates of the uncorrupted samples are drawn from a Gaussian distribution N(0, Σ) on R^d. We give nearly tight upper bounds and computational lower bounds for this problem.
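The corruption model from the abstract can be made concrete with a small simulation. The sketch below is illustrative only: the dimensions, noise scale, and the particular corruption the "adversary" applies are my own choices, not values from the paper, and the Gaussian covariates use the identity covariance as a special case of N(0, Σ).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper)
n, d, eps = 1000, 5, 0.1           # samples, dimension, corruption fraction
beta = rng.normal(size=d)          # unknown regression vector

# Uncorrupted samples: covariates x ~ N(0, I_d), responses y = <x, beta> + noise
X = rng.normal(size=(n, d))
y = X @ beta + rng.normal(scale=0.1, size=n)

# An adversary may replace an eps-fraction of the (x, y) pairs arbitrarily;
# this is just one arbitrary choice of corruption
k = int(eps * n)
idx = rng.choice(n, size=k, replace=False)
X[idx] = rng.normal(size=(k, d)) * 10
y[idx] = rng.normal(size=k) * 100

# Ordinary least squares has no robustness to such outliers
ols = np.linalg.lstsq(X, y, rcond=None)[0]
err = np.linalg.norm(ols - beta)
```

Running this typically shows a large OLS error, which is the failure mode that robust estimators (the subject of the paper) are designed to avoid.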

Cited by 76 publications (83 citation statements)
References 16 publications
“…In simple LRs, two variables are used for finding the predictive function: the predictor variable, which is the independent variable, and the criterion variable, which is the dependent variable [35]. We have exploited the LR algorithm for forecasting the routing parameters as the predictor variables to generate a predictive model based on an observed routing dataset.…”
Section: A Linear Regression (mentioning)
confidence: 99%
“…Similarly, attributes irrelevant to the output variable are also removed with feature selection methods. In order to reduce the complexity of the model, the ridge regularization technique is used, which prevents any coefficient from reaching a large value by penalizing the sum of squares of the learned coefficients [35].…”
Section: A Linear Regression (mentioning)
confidence: 99%
“…Finally, we would like to add that moment matching was used in prior work to derive statistical query lower bounds for mixture distributions [17,20], to defend against adversarial examples [15], for robust statistics [21] and in other settings. Their proofs required different constructions from the one appearing in this paper.…”
Section: Related Work (mentioning)
confidence: 99%
“…Ideally, we wish to find a robust algorithm with breakdown point asymptotically convergent to 1 while being consistent under adaptive adversarial corruption. However, it is impossible to satisfy all three requirements [Diakonikolas et al., 2018]. A popular approach is to use the ℓ1-norm in the loss function.…”
Section: Related Work (mentioning)
confidence: 99%