Online Learning and Online Convex Optimization
2011
DOI: 10.1561/9781601985477

Cited by 526 publications (641 citation statements)
References 34 publications
“…In what follows, we may then derive the corresponding bound on the maximum deviation simply using (4).…”
Section: Bounds On the Maximum Estimation Deviations
Mentioning, confidence: 99%
“…Sequential prediction (or sequence prediction) [1][2][3][4][5] has been an important component of sequential learning. In this paper, we will utilize information theory to analyze the fundamental performance bounds of sequential prediction.…”
Section: Introduction
Mentioning, confidence: 99%
“…The subscript t on the cost function (2) reminds us of its dependence on the request r_t generated at time t. Since these events may vary according to a non-stationary process, we will use the concept of regret from online convex optimization [22].…”
Section: Problem Formulation
Mentioning, confidence: 99%
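For context, the quantity these excerpts appeal to is the standard regret of online convex optimization, the central performance measure of the cited monograph. A minimal statement, with notation assumed here (decision x_t, convex cost f_t, convex feasible set K) rather than taken from the excerpts:

```latex
% Regret after T rounds: the algorithm's cumulative cost minus that of
% the best fixed decision in hindsight. Here x_t is the decision played
% at round t, f_t the convex cost then revealed, and K the convex
% feasible set -- assumed notation, not quoted from the excerpts.
\[
  \operatorname{Regret}_T
    = \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in K} \sum_{t=1}^{T} f_t(x)
\]
```

An algorithm is called no-regret when this quantity grows sublinearly in T, so the average per-round gap to the best fixed decision vanishes.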
“…This assumption reflects that, in practice, caches are populated before the requests are issued. Since, by Lemma 1, the functions f_t(y) are convex, our problem falls in the Online Convex Optimization framework [22]. The performance metric of an algorithm in this line of work is the regret: the difference between the costs incurred by the algorithm and those of the best static configuration in hindsight.…”
Section: Problem Formulation
Mentioning, confidence: 99%
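Since both excerpts above cast their problems in the OCO framework, a minimal sketch of online (projected) gradient descent, the canonical OCO algorithm, may be useful. The function names, the Euclidean-ball feasible set, and the toy quadratic costs are illustrative assumptions, not the cited papers' method:

```python
import numpy as np

def online_gradient_descent(grad_fns, dim, eta=0.1, radius=1.0):
    """Sketch of projected online gradient descent (assumed setup):
    at each round t we commit to x_t, observe the gradient of the
    convex cost f_t, and take a projected gradient step.

    grad_fns : list of callables, grad_fns[t](x) = gradient of f_t at x
    dim      : dimension of the decision vector
    eta      : step size (fixed here for simplicity)
    radius   : radius of the Euclidean ball taken as the feasible set
    """
    x = np.zeros(dim)           # initial decision
    plays = []
    for grad in grad_fns:
        plays.append(x.copy())  # commit to x_t before f_t is revealed
        x = x - eta * grad(x)   # gradient step on the revealed cost
        norm = np.linalg.norm(x)
        if norm > radius:       # Euclidean projection back onto the ball
            x = x * (radius / norm)
    return plays

# Toy usage: quadratic costs f_t(x) = ||x - z_t||^2 with random targets z_t.
rng = np.random.default_rng(0)
targets = [rng.normal(size=3) for _ in range(5)]
grads = [lambda x, z=z: 2.0 * (x - z) for z in targets]
decisions = online_gradient_descent(grads, dim=3)
```

With a step size on the order of 1/sqrt(T), this scheme attains O(sqrt(T)) regret for bounded convex costs, which is why it serves as the default baseline in this line of work.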
“…be a sequence generated by the update rule (1) in the case where only partial labels of instances are revealed, and let e_t be the gap as defined in (3). Then, for any f ∈ ℱ and any step size η > 0, we have…”
Section: Estimated Gradient For Online Learning With Sparse Labels
Mentioning, confidence: 99%
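The last excerpt concerns update rules driven by an estimated gradient when only part of each label is revealed. Its exact rule (1) and gap (3) are not reproduced above, so the following is a hypothetical illustration of one common construction, importance-weighting the observed label coordinates so the gradient estimate is unbiased; all names and the squared-loss model are assumptions:

```python
import numpy as np

def estimated_gradient_step(w, x, y_partial, observed_mask, p_obs, eta):
    """Hypothetical online update with an estimated gradient when only
    some label coordinates are revealed (not the excerpt's rule (1)).

    w             : current weight matrix (n_labels x n_features)
    x             : feature vector of the current instance
    y_partial     : label vector with unobserved entries set to 0
    observed_mask : boolean vector, True where the label was revealed
    p_obs         : probability each label coordinate is observed
    eta           : step size
    """
    pred = w @ x                    # linear predictions for all labels
    residual = pred - y_partial     # squared-loss residual per label
    # Importance-weight observed coordinates so that, in expectation over
    # the observation mask, the estimate matches the full-information residual.
    residual_hat = np.where(observed_mask, residual / p_obs, 0.0)
    grad_hat = np.outer(residual_hat, x)   # estimated gradient of the loss
    return w - eta * grad_hat              # gradient step

# Toy usage: 4 labels, 3 features, each label revealed with probability 0.5.
rng = np.random.default_rng(1)
w = np.zeros((4, 3))
x = rng.normal(size=3)
y = rng.normal(size=4)
mask = rng.random(4) < 0.5
w = estimated_gradient_step(w, x, np.where(mask, y, 0.0), mask, 0.5, 0.1)
```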