2022
DOI: 10.48550/arxiv.2202.07554
Preprint
Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness

Abstract: Stochastic and adversarial data are two widely studied settings in online learning. But many optimization tasks are neither i.i.d. nor fully adversarial, which makes it of fundamental interest to get a better theoretical understanding of the world between these extremes. In this work we establish novel regret bounds for online convex optimization in a setting that interpolates between stochastic i.i.d. and fully adversarial losses. By exploiting smoothness of the expected losses, these bounds replace a depende…

Cited by 1 publication (1 citation statement). References 5 publications.
“…Moreover, our analysis assumes perfect tuning of constants (e.g., D, T, K) for simplicity. In practice, we would prefer to adapt to unknown parameters, motivating new applications and problems for adaptive online learning, which is already an area of active current investigation (see, e.g., Orabona & Pál, 2015;Hoeven et al, 2018;Cutkosky & Orabona, 2018;Cutkosky, 2019;Mhammedi & Koolen, 2020;Chen et al, 2021;Sachs et al, 2022;Wang et al, 2022). It is our hope that some of this expertise can be applied in the non-convex setting as well.…”
Section: Discussion
confidence: 99%