2022 · Preprint
DOI: 10.48550/arxiv.2202.04690

Smoothed Online Learning is as Easy as Statistical Learning

Abstract: Much of modern learning theory has been split between two regimes: the classical offline setting, where data arrive independently, and the online setting, where data arrive adversarially. While the former model is often both computationally and statistically tractable, the latter requires no distributional assumptions. In an attempt to achieve the best of both worlds, previous work proposed the smooth online setting, where each sample is drawn from an adversarially chosen distribution which is smooth, i.e., has density bounded above with respect to a fixed dominating measure.
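For concreteness, the smoothness condition the abstract alludes to can be written out; this is a sketch of the standard definition, using the smoothness parameter σ and dominating measure µ that also appear in the citation statements below:

\[
  p \text{ is } \sigma\text{-smooth w.r.t. } \mu
  \quad\Longleftrightarrow\quad
  p \ll \mu
  \ \text{ and } \
  \left\| \frac{\mathrm{d}p}{\mathrm{d}\mu} \right\|_{\infty} \le \frac{1}{\sigma}.
\]

At σ = 1 the constraint forces p = µ (an i.i.d.-style benign setting), while σ → 0 approaches the fully adversarial online setting, which is the "best of both worlds" interpolation the abstract describes.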

Cited by 4 publications (20 citation statements) · References 24 publications
“…In this section, we provide basic definitions and set up the learning problem. We begin by defining a smooth distribution, as in Block et al [2022], Haghtalab et al [2021]: Definition 1. Let µ be a probability measure on a measurable space X.…”
Section: Preliminaries (mentioning)
confidence: 99%
“…To circumvent the pessimism of the sequential setting, recent works [Rakhlin et al, 2011, Haghtalab et al, 2020, Block et al, 2022, Haghtalab et al, 2022] have studied the smoothed sequential learning paradigm, where the adversary is constrained to choose x_t at random from any probability distribution p_t with density at most 1/σ with respect to a known measure µ. The most recent of these results point to a striking statistical-computational gap: whereas there exist algorithms which attain regret that scales with √(T log(1/σ)), computationally efficient algorithms can only hope for poly(T/σ) regret in general, even against a realizable adversary [Haghtalab et al, 2022, Theorem 5.2].…”
Section: Introduction (mentioning)
confidence: 99%
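As a minimal illustration of the constraint described in this citation statement, the sketch below simulates a σ-smooth adversary on X = [0, 1] with µ taken to be the uniform (Lebesgue) measure: the adversary may concentrate each x_t near a target point of its choosing, but must spread its mass over an interval of length σ, so its density never exceeds 1/σ. All names here are hypothetical illustrations, not code from any of the cited papers.

import random

SIGMA = 0.05  # smoothness parameter: density w.r.t. the uniform measure is at most 1/SIGMA
T = 100_000   # number of adversarial rounds to simulate

def smooth_adversary_sample(target: float, sigma: float) -> float:
    """Draw one x_t from a sigma-smooth distribution on [0, 1].

    The adversary would like to play `target` exactly, but the smoothness
    constraint forces it to spread its mass over an interval of length
    sigma, keeping its density at most 1/sigma.
    """
    lo = min(max(target - sigma / 2, 0.0), 1.0 - sigma)  # keep the interval inside [0, 1]
    return random.uniform(lo, lo + sigma)

# The adversary concentrates as hard as it can around the point 0.3.
samples = [smooth_adversary_sample(0.3, SIGMA) for _ in range(T)]

# Empirical sanity check of the density bound: the probability mass any
# interval of width w can receive is at most w / SIGMA (up to sampling error).
w = 0.01
mass = sum(0.3 <= x < 0.3 + w for x in samples) / T
print(f"mass of a width-{w} interval: {mass:.3f} (smoothness bound: {w / SIGMA:.3f})")

Setting SIGMA = 1 recovers the purely uniform (statistical) setting, while SIGMA → 0 lets the adversary concentrate arbitrarily, approaching the fully adversarial sequential setting whose pessimism this paradigm is meant to circumvent.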