2017
DOI: 10.48550/arxiv.1703.10444
Preprint
On Fundamental Limits of Robust Learning

Jiashi Feng

Abstract: We consider the problems of robust PAC learning from distributed and streaming data, which may contain malicious errors and outliers, and analyze their fundamental complexity questions. In particular, we establish lower bounds on the communication complexity for distributed robust learning performed on multiple machines, and on the space complexity for robust learning from streaming data on a single machine. These results demonstrate that gaining robustness of learning algorithms is usually at the expense of i…

Cited by 1 publication (1 citation statement)
References 14 publications
“…However, they only focus on label corruptions. Feng (2017) considers the fundamental limits of learning from adversarial distributed data, but in the case where each node can iteratively send corrupted updates with a certain probability. Feng et al (2014) provide a method for distributing the computation of any robust learning algorithm that operates on a single large dataset.…”
Section: Related Work
confidence: 99%