2020
DOI: 10.48550/arxiv.2002.10384
Preprint
On the Sample Complexity of Adversarial Multi-Source PAC Learning

Nikola Konstantinov, Elias Frantar, Dan Alistarh, et al.

Abstract: We study the problem of learning from multiple untrusted data sources, a scenario of increasing practical relevance given the recent emergence of crowdsourcing and collaborative learning paradigms. Specifically, we analyze the situation in which a learning system obtains datasets from multiple sources, some of which might be biased or even adversarially perturbed. It is known that in the single-source case, an adversary with the power to corrupt a fixed fraction of the training data can prevent PAC-learnability…
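The setting described in the abstract can be illustrated with a small simulation: a learner receives N datasets, and an adversary fully controls a fixed fraction of the sources. The following Python sketch is illustrative only; all names, the threshold labeling rule, and the label-flipping corruption are assumptions for the example, not details taken from the paper.

```python
import random

def make_sources(n_sources=10, n_per_source=100, corrupt_frac=0.3, seed=0):
    """Simulate the multi-source setting: N sources each contribute a
    dataset of (x, y) pairs; a fixed fraction of sources is adversarial."""
    rng = random.Random(seed)

    def clean_source():
        # Honest source: labels follow a simple threshold rule y = 1[x > 0].
        data = []
        for _ in range(n_per_source):
            x = rng.uniform(-1.0, 1.0)
            data.append((x, int(x > 0)))
        return data

    def corrupted_source():
        # One possible adversarial corruption: all labels flipped.
        data = []
        for _ in range(n_per_source):
            x = rng.uniform(-1.0, 1.0)
            data.append((x, int(x <= 0)))
        return data

    n_bad = int(corrupt_frac * n_sources)
    sources = [corrupted_source() for _ in range(n_bad)]
    sources += [clean_source() for _ in range(n_sources - n_bad)]
    rng.shuffle(sources)  # the learner does not know which sources are corrupted
    return sources

sources = make_sources()
```

The key structural point the paper exploits is that corruption is aligned with source boundaries: the adversary controls whole datasets rather than arbitrary individual examples, which is what makes the multi-source case different from the single-source one.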

Cited by 1 publication (1 citation statement). References 14 publications.
“…For the multisource case N > 1, to the best of our knowledge, situations of negative transfer have only been described in adversarial settings with corrupted labels. For instance, the recent papers of [Qia18, MMM19, KFAL20] show limits of multitask learning under various adversarial corruptions of labels in datasets, while [SZ19] derives a positive result, i.e., rates (for Lipschitz loss) decreasing in both N and n, up to excluded or downweighted datasets. The procedure of [SZ19] is, however, nonadaptive, as it requires known noise proportions.…”
Section: Background and Related Work
confidence: 99%