2008
DOI: 10.1016/j.jcss.2007.08.003

The weak aggregating algorithm and weak mixability

Abstract: This paper resolves the problem of predicting as well as the best expert up to an additive term of the order o(n), where n is the length of a sequence of letters from a finite alphabet. We call the games that permit this weakly mixable and give a geometrical characterisation of the class of weakly mixable games. Weak mixability turns out to be equivalent to convexity of the finite part of the set of superpredictions. For bounded games we introduce the Weak Aggregating Algorithm that allows us to obtain additiv…
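In the abstract's terms, the claim being resolved can be written as follows (a restatement, with Loss_Learner and Loss_k denoting cumulative losses over n trials; the notation here is ours, not the paper's):

```latex
% Weak mixability: the learner can guarantee, uniformly over the experts k,
%   Loss_Learner(n) <= Loss_k(n) + o(n)   as n -> infinity,
% and the paper shows this holds iff the finite part of the set of
% superpredictions is convex.
\[
  \mathrm{Loss}_{\mathrm{Learner}}(n) \;\le\; \min_{k} \mathrm{Loss}_{k}(n) \;+\; o(n).
\]
```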

Cited by 46 publications (33 citation statements)
References 8 publications
“…The game of prediction is played repeatedly by a learner that has access to decisions made by a pool of experts, which leads to the following prediction protocol: Here L_N is the cumulative loss of the learner at time step N, and L^k_N is the cumulative loss of the kth expert at this step. There are many well-developed algorithms for the learner; probably the best known are the Weighted Average Algorithm [8], the Strong Aggregating Algorithm [11,12], the Weak Aggregating Algorithm [7], the Hedge Algorithm [4], and Tracking the Best Expert [6]. The basic idea behind these algorithms is to assign weights to the experts and then use their predictions in accordance with their weights in a way that minimizes the learner's loss.…”
Section: Online Prediction Framework and Aggregating Algorithm
confidence: 99%
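To make the weight-then-aggregate idea described in this excerpt concrete, here is a minimal Python sketch of an exponentially weighted average forecaster with a fixed learning rate, assuming square loss with predictions and outcomes in [0, 1]. It is a generic illustration, not an implementation of any of the specific algorithms cited above; the function name and the parameter eta are ours.

```python
import numpy as np

def weighted_average_forecaster(expert_preds, outcomes, eta=2.0):
    """Follow the protocol sketched above: at each step the learner sees the
    experts' predictions, outputs a weighted average, then observes the outcome.

    expert_preds: array of shape (T, K), expert predictions in [0, 1]
    outcomes:     array of shape (T,),   outcomes in [0, 1]
    Returns (learner_cum_loss, expert_cum_losses) under square loss.
    """
    T, K = expert_preds.shape
    expert_losses = np.zeros(K)   # L^k_N in the excerpt's notation
    learner_loss = 0.0            # L_N
    for t in range(T):
        # Weights decay exponentially in each expert's cumulative loss.
        w = np.exp(-eta * expert_losses)
        w /= w.sum()
        gamma = w @ expert_preds[t]          # learner's prediction
        y = outcomes[t]
        learner_loss += (gamma - y) ** 2
        expert_losses += (expert_preds[t] - y) ** 2
    return learner_loss, expert_losses
```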
“…This subsection will introduce the main technical tool used in this paper, an aggregating algorithm (in fact intermediate between the strong aggregating algorithm of [42] and the weak aggregating algorithm of [23]). For future use in §5, we will allow the observations to belong to the Euclidean space R^m (in fact, we will only be interested in the cases m = 1 and m = 2).…”
Section: Aggregating Algorithm
confidence: 99%
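For orientation, the strong and weak aggregating algorithms contrasted in this excerpt both maintain exponential weights over the experts. A rough common form is shown below; this is a sketch of the weighting step only, not the exact substitution step of either algorithm, and the notation η_t is ours.

```latex
% Exponential weights over experts after t-1 trials: the strong Aggregating
% Algorithm keeps the learning rate eta fixed, while the Weak Aggregating
% Algorithm lets eta_t decrease with t (on the order of 1/sqrt(t)).
\[
  w_k^{(t)} \;\propto\; \exp\!\bigl(-\eta_t \, L_k^{(t-1)}\bigr),
  \qquad k = 1, \dots, K.
\]
```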
“…In Section 2, we propose a generalization of the Aggregating Algorithm [20] and prove the same bound as in [20] but for the discounted loss. In Section 3, we consider convex loss functions and propose an algorithm similar to the Weak Aggregating Algorithm [14] and the exponentially weighted average forecaster with time-varying learning rate [2, § 2.3], with a similar loss bound. In Section 4, we consider the use of prediction with expert advice for the regression problem and adapt the Aggregating Algorithm for Regression [22] (applied to spaces of linear functions and to reproducing kernel Hilbert spaces) to the discounted square loss.…”
Section: Introduction
confidence: 99%
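The "time-varying learning rate" mentioned in this excerpt can be illustrated by letting the learning rate shrink with the trial number, in the spirit of the Weak Aggregating Algorithm and the exponentially weighted average forecaster of [2, § 2.3]. The sketch below again assumes square loss on [0, 1]; the schedule η_t = c/√t and the constant c are illustrative choices of ours, not taken from the cited papers.

```python
import numpy as np

def time_varying_eta_forecaster(expert_preds, outcomes, c=1.0):
    """Exponentially weighted average predictions with a decreasing
    learning rate eta_t = c / sqrt(t), under square loss."""
    T, K = expert_preds.shape
    expert_losses = np.zeros(K)
    learner_loss = 0.0
    for t in range(1, T + 1):
        eta_t = c / np.sqrt(t)                 # learning rate shrinks over time
        w = np.exp(-eta_t * expert_losses)
        w /= w.sum()
        gamma = w @ expert_preds[t - 1]        # learner's prediction
        y = outcomes[t - 1]
        learner_loss += (gamma - y) ** 2
        expert_losses += (expert_preds[t - 1] - y) ** 2
    return learner_loss, expert_losses
```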