2008
DOI: 10.1007/978-3-540-87987-9_19

Supermartingales in Prediction with Expert Advice

Abstract: This paper compares two methods of prediction with expert advice, the Aggregating Algorithm and Defensive Forecasting, in two different settings. The first setting is traditional, with a countable number of experts and a finite number of outcomes. Surprisingly, these two methods of fundamentally different origin lead to identical procedures. In the second setting the experts can give advice conditional on the learner’s future decision. Both methods can be used in the new setting and give the same performan…
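To illustrate the first setting, here is a minimal sketch (ours, not from the paper) of the Aggregating Algorithm for the binary log-loss game with a finite pool of experts. For log loss with learning rate η = 1, the substitution step reduces to taking the weighted mixture of the experts' predictions; the function name and interface below are illustrative assumptions.

```python
import math

def aggregating_algorithm(expert_preds, outcomes, eta=1.0):
    """Illustrative sketch of the Aggregating Algorithm for the
    binary log-loss game. With eta = 1 the substitution function
    is simply the weight-averaged mixture of expert predictions.

    expert_preds: list over time steps; each element is a list of
        probabilities (one per expert) that the outcome is 1.
    outcomes: list of observed outcomes in {0, 1}.
    Returns the learner's cumulative log loss.
    """
    k = len(expert_preds[0])
    weights = [1.0 / k] * k          # uniform prior over the K experts
    total_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        # Learner's prediction: mixture of expert predictions.
        p = sum(w * q for w, q in zip(weights, preds))
        total_loss += -math.log(p if y == 1 else 1.0 - p)
        # Exponential weight update: w_i ∝ w_i * exp(-eta * loss_i),
        # which for log loss is w_i * (expert i's probability of y)^eta.
        weights = [w * (q if y == 1 else 1.0 - q) ** eta
                   for w, q in zip(weights, preds)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return total_loss
```

For this game the classical guarantee is that the learner's cumulative loss never exceeds the best expert's loss plus ln K, which the sketch above inherits from the mixture/Bayes-update structure.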

Cited by 11 publications (25 citation statements)
References 30 publications
“…For the standard undiscounted case (Accountant announces α t = 1 at each step t), this theorem was proved by Vovk in [19] with the help of the Aggregating Algorithm (AA) as Learner's strategy. It is known ( [10,20]) that this bound is asymptotically optimal for large pools of Experts (for games satisfying some assumptions): if the game does not satisfy (3) for some c ≥ 1 and η > 0, then, for sufficiently large K, there is a strategy for Experts and Reality (recall that Accountant always says α t = 1) such that Learner cannot secure (4). For the special case of c = 1, bound (4) is tight for any fixed K as well [21].…”
Section: Linear Bounds for Learner's Loss
confidence: 99%
“…Algorithm 3 originates in the "Fake Defensive Forecasting" (FDF) algorithm from [5,Theorem 9]. That algorithm is based on the ideas of defensive forecasting [4], in particular, Hoeffding supermartingales [24], combined with the ideas from an early version of the Weak Aggregating Algorithm [13]. However, our analysis in Theorem 2 is completely different from [5], following the lines of [2, Theorem 2.2] and [13].…”
Section: A Bound with Respect to the ε-Best Expert
confidence: 99%
“…Discussions with Alex Gammerman, Glenn Shafer, and Alexander Shen, and detailed comments of the anonymous referees for the conference version [4] and the journal version have helped us improve the paper.…”
Section: Acknowledgements
confidence: 99%