2020
DOI: 10.1609/aaai.v34i04.6149

Trading-Off Static and Dynamic Regret in Online Least-Squares and Beyond

Abstract: Recursive least-squares algorithms often use forgetting factors as a heuristic to adapt to non-stationary data streams. The first contribution of this paper rigorously characterizes the effect of forgetting factors for a class of online Newton algorithms. For exp-concave and strongly convex objectives, the algorithms achieve a dynamic regret of max{O(log T), O(√(TV))}, where V is a bound on the path length of the comparison sequence. In particular, we show how classic recursive least-squares with a forgetting f…
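For context on the excerpts below: the forgetting-factor heuristic the abstract refers to is the standard one from adaptive filtering. A minimal sketch of classic recursive least-squares with a forgetting factor λ ∈ (0, 1] follows; the class name, the initialization constant delta, and the drift simulation are illustrative assumptions, not the paper's online Newton algorithms.

```python
import numpy as np

class ForgettingRLS:
    """Classic recursive least-squares with a forgetting factor.

    lam = 1 recovers ordinary RLS; lam < 1 discounts past samples
    geometrically, which is the non-stationarity heuristic the
    abstract analyzes. P tracks the inverse of the discounted
    covariance sum_{s<=t} lam^(t-s) x_s x_s^T (up to initialization).
    """

    def __init__(self, dim, lam=0.98, delta=1e-2):
        self.lam = lam
        self.theta = np.zeros(dim)       # current parameter estimate
        self.P = np.eye(dim) / delta     # inverse-covariance estimate

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)     # RLS gain vector
        err = y - self.theta @ x         # a-priori prediction error
        self.theta = self.theta + k * err
        # Rank-one (Sherman-Morrison) update, rescaled by 1/lam.
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# Usage on a slowly drifting linear model (illustrative):
rng = np.random.default_rng(0)
rls = ForgettingRLS(dim=3, lam=0.95)
theta_true = np.array([1.0, -2.0, 0.5])
for t in range(500):
    theta_true += 0.01 * rng.standard_normal(3)   # slow parameter drift
    x = rng.standard_normal(3)
    rls.update(x, float(theta_true @ x) + 0.1 * rng.standard_normal())
```

A smaller λ forgets faster and tracks larger path lengths at the cost of the static log T term, which is the trade-off the title refers to.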

Cited by 8 publications (9 citation statements)
References 12 publications
“…Online control: Online regret analysis is an extensively studied topic in online learning problems (Hazan, 2019; Shalev-Shwartz et al., 2011; Yuan and Lamperski, 2020). Recently, many papers have studied online regret performance in control problems with general time-varying costs, disturbances, and a known system model (Abbasi-Yadkori et al., 2014; Cohen et al., 2018; Agarwal et al., 2019a,b; Goel and Wierman, 2019).…”
Section: Related Work
confidence: 99%
“…By hedging over a collection of OGD algorithms defined by an exponential grid of step sizes, (Zhang et al., 2018a) proposes an algorithm that achieves a faster rate of O(√(n(1 + P_n))), which is shown to be minimax optimal when the loss functions are convex. (Yuan and Lamperski, 2019) proposes strategies that can attain regret rates of…”
Section: Related Work
confidence: 99%
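The hedging construction this excerpt describes (Ader, from Zhang et al., 2018a) can be sketched as a meta-algorithm: run one OGD expert per step size on an exponential grid and combine their iterates with exponential weights. This is a schematic sketch under assumed interfaces; grad(t, x), loss(t, x), the Euclidean-ball projection, and meta_lr are illustrative assumptions, not the published pseudocode.

```python
import numpy as np

def hedged_ogd(grad, loss, T, dim, eta_min, eta_max, meta_lr=1.0, radius=1.0):
    """Ader-style sketch: OGD experts over an exponential grid of
    step sizes, combined by exponential weights (Hedge).

    grad(t, x) / loss(t, x) give the gradient and value of the
    round-t convex loss. Yields the point played at each round.
    """
    # Exponential grid of candidate step sizes: eta_min, 2*eta_min, ...
    etas = []
    eta = eta_min
    while eta <= eta_max:
        etas.append(eta)
        eta *= 2.0
    X = np.zeros((len(etas), dim))   # one OGD iterate per expert
    logw = np.zeros(len(etas))       # Hedge log-weights over experts

    for t in range(T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        yield w @ X                  # play the weighted combination
        # Meta step: reweight each expert by the loss of its own iterate.
        logw -= meta_lr * np.array([loss(t, x_i) for x_i in X])
        # Expert step: projected OGD with each expert's own step size.
        for i, eta_i in enumerate(etas):
            x_new = X[i] - eta_i * grad(t, X[i])
            nrm = np.linalg.norm(x_new)
            X[i] = x_new if nrm <= radius else x_new * (radius / nrm)
```

For squared losses f_t(x) = ½‖x − z_t‖², for instance, one would pass grad=lambda t, x: x - z[t] and loss=lambda t, x: 0.5 * np.sum((x - z[t]) ** 2). The point of the exponential grid is that some expert's step size is within a factor of 2 of the best one for the realized path length, so the meta-algorithm pays only a small hedging overhead on top of that expert's regret.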
“…In the OCO setting, when the environment is benign, (Zhao et al., 2020) replaces the √n dependence in the O(√(n(1 + P_n))) regret attained by (Zhang et al., 2018b) with problem-dependent quantities that can be much smaller than √n, although the linear smoother lower bound in Proposition 2 of (Baby and Wang, 2019) would imply that an OEGD (Online Extra Gradient Descent) (Zhao et al., 2020) expert with any learning-rate sequence requires Ω(√(n P_n)) dynamic regret for the 1D-TV-denoising problem. Interestingly, in (Yuan and Lamperski, 2019) the authors mention that even in the one-dimensional setting, a lower bound on dynamic regret for strongly convex / exp-concave losses that holds uniformly over the entire range 0 ≤ P_n ≤ n is unknown. However, we find that one can combine the existing lower bounds on univariate TV-denoising in the stochastic setting (Donoho and Johnstone, 1998) with the lower bound construction of (Vovk, 2001) (or see Theorem 11.9 in (Cesa-Bianchi and Lugosi, 2006)) for online learning with squared error losses to obtain an Ω(log n ∨ n^{1/3} C_n^{2/3}) lower bound in one dimension (see Appendix C for details).…”
Section: A. More on Related Work
confidence: 99%
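The combination step in this excerpt uses only the fact that the maximum of two valid lower bounds is itself a valid lower bound: Vovk's construction forces Ω(log n) regret even against a static comparator, while the stochastic TV-denoising bound of Donoho and Johnstone forces Ω(n^{1/3} C_n^{2/3}) against comparators of total variation C_n. In assumed notation, as a LaTeX sketch:

```latex
% Both constructions give valid lower bounds on the worst-case dynamic
% regret against comparators of total variation at most C_n, hence so
% does their maximum:
\sup_{f_1,\dots,f_n} R_n^{\mathrm{dyn}}(C_n)
  \;\ge\; \max\bigl\{\, \Omega(\log n),\ \Omega\bigl(n^{1/3} C_n^{2/3}\bigr) \bigr\}
  \;=\; \Omega\bigl(\log n \vee n^{1/3} C_n^{2/3}\bigr).
```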
“…which has drawn considerable attention recently (Besbes et al., 2015; Jadbabaie et al., 2015; Mokhtari et al., 2016; Yang et al., 2016; Wei et al., 2016; Zhang et al., 2017, 2018a; Auer et al., 2019; Baby and Wang, 2019; Yuan and Lamperski, 2020; Zhao et al., 2020a; Zhang et al., 2020a,b; Zhao et al., 2021a; Zhao and Zhang, 2021). The measure is also called the universal dynamic regret (or general dynamic regret), in the sense that it gives a universal guarantee that holds against any comparator sequence.…”
Section: Introduction
confidence: 99%
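For reference, the universal dynamic regret described in this excerpt is measured against an arbitrary comparator sequence, and the path length P_n appearing in the O(√(n(1 + P_n))) bounds above measures that sequence's variability. In assumed (standard) notation:

```latex
% Universal dynamic regret of iterates x_1, ..., x_n against any
% comparator sequence u_1, ..., u_n, and the comparator path length.
R_n^{\mathrm{dyn}}(u_1,\dots,u_n) = \sum_{t=1}^{n} f_t(x_t) - \sum_{t=1}^{n} f_t(u_t),
\qquad
P_n = \sum_{t=2}^{n} \lVert u_t - u_{t-1} \rVert .
```

Static regret is recovered by taking u_1 = … = u_n (so P_n = 0), which is why guarantees of this form interpolate between the static and dynamic settings, as in the paper's title.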