Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms 2013
DOI: 10.1137/1.9781611973105.34
The Fast Cauchy Transform and Faster Robust Linear Regression

Cited by 42 publications (108 citation statements: 3 supporting, 105 mentioning, 0 contrasting).
References 25 publications.
“…Note that the embedding dimension for p > 2 is n^{1−2/p} poly(d), which improved upon the previous n/poly(d) and is close to optimal given the lower bound of Ω(n^{1−2/p}) [86]. The desirable (1 ± ε) distortion can be achieved by using the embeddings for preconditioning and sampling proportional to the ℓ_p leverage scores [30,34,96].…”
Section: Lemma 11 (Distributional Johnson–Lindenstrauss Lemma) (citation type: mentioning)
confidence: 81%
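To make the precondition-then-sample pipeline in this excerpt concrete, here is a minimal Python sketch specialized to p = 1, the setting of the cited paper: embed with a Cauchy sketch, extract a well-conditioned basis via QR, sample rows proportionally to its ℓ_1 row norms (the leverage scores), and solve the reweighted subproblem. The function names, constants, and the exact LP solve are illustrative choices under these assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_regression(A, b, w):
    """min_x sum_i w_i |a_i^T x - b_i|, solved exactly as a linear
    program with auxiliary variables t_i >= |a_i^T x - b_i|."""
    m, d = A.shape
    c = np.concatenate([np.zeros(d), w])
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + m))
    return res.x[:d]

def sample_and_solve_l1(A, b, sample_size=None, rng=None):
    """Illustrative pipeline from the excerpt: embed (Cauchy sketch),
    precondition (QR), sample by l1 leverage scores, solve reweighted."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    s = sample_size or 50 * d

    # 1. Cauchy (1-stable) sketch; O(d log d) rows suffice for a
    #    low-distortion l1 subspace embedding (constants illustrative).
    S = rng.standard_cauchy((8 * d, n))
    _, R = np.linalg.qr(S @ A)

    # 2. U = A R^{-1} is a well-conditioned basis; its l1 row norms
    #    serve as (estimates of) the l1 leverage scores.
    U = np.linalg.solve(R.T, A.T).T
    scores = np.abs(U).sum(axis=1)
    p = np.minimum(1.0, s * scores / scores.sum())

    # 3. Keep row i with probability p_i, reweight by 1/p_i, and
    #    solve the much smaller weighted l1 problem exactly.
    keep = rng.random(n) < p
    return weighted_l1_regression(A[keep], b[keep], 1.0 / p[keep])
```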
“…A first step was done by Woodruff and Sohler [93], who designed the first subspace embedding for ℓ_1 via Cauchy random variables. The method is in principle generalizable to p-stable distributions and was improved in [30,77]. The idea is that a sum of such random variables again forms a random variable from the same type of distribution, leading to concentration results for the ℓ_p norm under study.…”
Section: Lemma 11 (Distributional Johnson–Lindenstrauss Lemma) (citation type: mentioning)
confidence: 99%
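The 1-stability property the excerpt appeals to is easy to check numerically: if the entries of S are i.i.d. standard Cauchy, each coordinate of Sx is Cauchy-distributed with scale ‖x‖_1, so a median-based estimator recovers the ℓ_1 norm (a median is needed because the Cauchy distribution has no mean). A small self-contained demonstration; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 1000, 4001            # input dimension, number of sketch rows
x = rng.normal(size=n)

# By 1-stability, a weighted sum of independent standard Cauchys is
# Cauchy with scale equal to the l1 norm of the weights, so each
# coordinate of S @ x is distributed as Cauchy(0, ||x||_1).
S = rng.standard_cauchy((r, n))
y = S @ x

# The median of |Cauchy(0, sigma)| equals sigma, so a median-based
# estimator recovers ||x||_1 (a sample mean would not converge).
est = np.median(np.abs(y))
print(est, np.abs(x).sum())  # the two values should be close
```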
“…However, as in the cycle case, rather than evaluating every f̃_{λ_1,…,λ_t} to find the minimum, it is possible to find the minimum more efficiently. One option is to exploit the convexity of f as in Section 3 using a recursive regression algorithm [13], or to use recent results on robust regression via subspace embeddings [6,15].…”
Section: Proof (Consider a bijection π between …) (citation type: mentioning)
confidence: 99%
“…In Table 1, RLA with algorithmic leveraging (RLA for short) [Clarkson et al., 2013, Yang et al., 2014] is a popular method for obtaining a low-precision solution, and randomized IPCPM is an iterative method for finding a higher-precision solution [Meng and Mahoney, 2013b] for unconstrained ℓ_1 regression. Clearly, pwSGD has a uniformly better complexity than that of RLA methods in terms of both d and ε, no matter which underlying preconditioning method is used.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
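As a rough illustration of the pwSGD idea being compared here (precondition, then run SGD with rows sampled by ℓ_1 leverage scores), the following heavily simplified Python sketch may help. It is not the published algorithm: the step-size schedule and iterate averaging are simplified away, and all names and constants are illustrative.

```python
import numpy as np

def pwsgd_l1(A, b, n_iters=20000, step=1.0, rng=None):
    """Simplified sketch of preconditioned weighted SGD for l1
    regression: precondition with a Cauchy sketch, sample rows
    proportionally to l1 leverage scores, and take importance-weighted
    subgradient steps in the preconditioned variable."""
    rng = np.random.default_rng(rng)
    n, d = A.shape

    # Preconditioner: R from QR of a Cauchy sketch, so U = A R^{-1}
    # is a well-conditioned basis for range(A) in the l1 sense.
    S = rng.standard_cauchy((8 * d, n))
    _, R = np.linalg.qr(S @ A)
    U = np.linalg.solve(R.T, A.T).T           # U = A @ inv(R)

    # l1 leverage scores -> row-sampling distribution.
    scores = np.abs(U).sum(axis=1)
    p = scores / scores.sum()

    # SGD in the preconditioned variable y, where x = R^{-1} y.
    y = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.choice(n, p=p)
        resid = U[i] @ y - b[i]               # equals a_i^T x - b_i
        g = np.sign(resid) * U[i] / p[i]      # unbiased subgradient w.r.t. y
        y -= (step / np.sqrt(t)) * g
    return np.linalg.solve(R, y)              # map back: x = R^{-1} y
```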