ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9054153
Weighted Gradient Coding with Leverage Score Sampling

Abstract: A major hurdle in machine learning is scalability to massive datasets. Approaches to overcome this hurdle include compression of the data matrix and distributing the computations. Leverage score sampling provides a compressed approximation of a data matrix using an importance weighted subset. Gradient coding has been recently proposed in distributed optimization to compute the gradient using multiple unreliable worker nodes. By designing coding matrices, gradient coded computations can be made resilient to str…
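The leverage score sampling mentioned in the abstract can be sketched as follows. This is a minimal illustrative version, not the paper's exact construction: the function names and the 1/sqrt(t·p_i) rescaling convention are the standard textbook choices, assumed here.

```python
import numpy as np

def leverage_scores(A):
    # Leverage scores of the rows of A: squared row norms of an
    # orthonormal basis Q for the column space of A (thin QR).
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)

def sample_rows(A, t, rng):
    # Importance-weighted row sampling: draw t rows with probability
    # proportional to their leverage scores, rescaling each sampled row
    # by 1/sqrt(t * p_i) so that S^T S approximates A^T A in expectation.
    scores = leverage_scores(A)
    p = scores / scores.sum()
    idx = rng.choice(A.shape[0], size=t, p=p)
    return A[idx] / np.sqrt(t * p[idx])[:, None]

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 5))
S = sample_rows(A, 200, rng)  # S^T S is a spectral approximation of A^T A
```

For a full-column-rank matrix the leverage scores lie in [0, 1] and sum to the number of columns, which is why they serve as a natural importance distribution over rows.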

Cited by 9 publications (14 citation statements) · References 70 publications
“…One could sample a fixed number of times, as is done in most such algorithms. We describe the case where t distinct pairs are sampled, to make the connection with distributed computations more natural; as was done in [13].…”
Section: Weighted CR-Multiplication (mentioning)
confidence: 99%
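The CR-multiplication referenced above approximates a matrix product A·B by sampling column-row pairs. A minimal sketch, assuming the standard unbiased estimator with sampling proportional to the product of column and row norms; note the excerpt describes sampling t *distinct* pairs as in [13], whereas this sketch samples with replacement for simplicity:

```python
import numpy as np

def cr_multiply(A, B, t, rng):
    # CR-approximation of A @ B: sample t column-row pairs (A[:, k], B[k, :])
    # with probability proportional to ||A[:, k]|| * ||B[k, :]||, and average
    # the rescaled outer products. This is an unbiased estimator of A @ B.
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=t, p=p)
    est = np.zeros((A.shape[0], B.shape[1]))
    for k in idx:
        est += np.outer(A[:, k], B[k, :]) / (t * p[k])
    return est

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 50))
B = rng.standard_normal((50, 20))
approx = cr_multiply(A, B, 40, rng)  # unbiased estimate of A @ B
```

When all of the "mass" of A sits in a single column, the sampling distribution concentrates on that column and the estimator recovers A @ B exactly, which is a convenient sanity check.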
“…These schemes achieve a better recovery threshold, as they encode linear combinations of submatrices of one or both matrices and then carry out the computation on the encoded submatrices; the matrix product is then decoded from a fraction of all the assigned tasks. As in GC, in our CMMSs we first carry out the computations and then encode them locally at the worker nodes, e.g., (25) and (28). Our CMMSs meet the optimal recovery threshold known for GC, as they are derived from a GCS.…”
Section: Comparison to Prior Work (mentioning)
confidence: 99%
“…The Reed-Solomon based scheme was also used as a basis for weighted gradient coding [28]. The idea behind the weighting is similar to that of the weighted CMMS [48] described in Section V-E.…”
Section: A Reed-Solomon Scheme and Weighted Gradient Coding (mentioning)
confidence: 99%
“…The gradient coding (GC) scheme, introduced in [8], considers gradient estimates on subsets of a dataset as partial computations, and achieves redundancy by replicating parts of the dataset at multiple workers. This approach has been extended in various directions to improve the performance [9]–[14]. Let G = {g1, …”
Section: Extension to Coded Communication Scenario (mentioning)
confidence: 99%
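The replication-based gradient coding the excerpt describes can be illustrated with the small worked example commonly attributed to [8]: 3 workers, 3 data partitions, tolerating 1 straggler. The encoding matrix B and decoding vectors below are the classic instance; the function names are illustrative, not from the cited papers.

```python
import numpy as np

# n = 3 workers, dataset split into 3 parts, tolerating s = 1 straggler.
# Each worker sends a fixed linear combination (a row of B) of the partial
# gradients on the parts it holds; the master recovers g1 + g2 + g3 from
# the coded results of ANY 2 of the 3 workers.
B = np.array([[0.5, 1.0,  0.0],   # worker 0 holds parts 1, 2
              [0.0, 1.0, -1.0],   # worker 1 holds parts 2, 3
              [0.5, 0.0,  1.0]])  # worker 2 holds parts 1, 3

# Decoding vectors: for each 2-worker subset S, a_S @ B[S] = (1, 1, 1).
DECODE = {(0, 1): np.array([2.0, -1.0]),
          (0, 2): np.array([1.0, 1.0]),
          (1, 2): np.array([1.0, 2.0])}

def full_gradient(partials, survivors):
    # partials: (3, d) array of per-partition gradients.
    # survivors: ids of the 2 workers that responded (any order).
    coded = B @ partials                 # what each worker would send
    s = tuple(sorted(survivors))
    return DECODE[s] @ coded[list(s)]    # = partials.sum(axis=0)
```

The redundancy cost is visible in B: each partition is stored at two workers, which is exactly what buys tolerance to one straggler.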