2022
DOI: 10.48550/arxiv.2203.05067
Preprint
Universal Regression with Adversarial Responses

Abstract: We provide algorithms for regression with adversarial responses under large classes of non-i.i.d. instance sequences, on general separable metric spaces, with provably minimal assumptions. We also give characterizations of learnability in this regression context. We consider universal consistency, which asks for strong consistency of a learner without restrictions on the value responses. Our analysis shows that such an objective is achievable for a significantly larger class of instance sequences than stationary p…

Cited by 1 publication (4 citation statements)
References 16 publications
“…This implies that adapting algorithms for specific context processes is necessary to ensure universal learning. This is the first example of such a phenomenon for online learning, for which previously considered settings always admitted optimistically universal learning rules, including realizable (noiseless) supervised learning [2,15,17], arbitrarily noisy (potentially adversarial rewards) supervised learning [19,20], and stationary contextual bandits [1]. Intuitively, we show that personalization and generalization are incompatible for non-stationary contextual bandits.…”
Section: Summary of the Present Work
confidence: 80%
“…This comes in stark contrast with all the learning frameworks that have been studied in the universal learning literature. Namely, for the noiseless full-feedback [2,17], noisy/adversarial full-feedback [20] and stationary partial-feedback [1] learning frameworks, analysis showed that there always existed an optimistically universal learning rule. Precisely, the optimistically universal learning rule for stationary contextual bandits in finite action spaces provided by [1] combined two strategies:…”
Section: Results
confidence: 99%