1998
DOI: 10.1007/pl00013825

On Bayes Methods for On-Line Boolean Prediction

Abstract: We examine a general Bayesian framework for constructing on-line prediction algorithms in the experts setting. These algorithms predict the bits of an unknown Boolean sequence using the advice of a finite set of experts. In this framework we use probabilistic assumptions on the unknown sequence to motivate prediction strategies. However, the relative bounds that we prove on the number of prediction mistakes made by these strategies hold for any sequence. The Bayesian framework provides a unified derivation and…
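The experts setting described in the abstract can be illustrated with a short sketch. This is not the paper's own algorithm, only a minimal posterior-weighted-majority predictor with a multiplicative (halving-style) penalty for mistaken experts; all function names and the `beta` parameter are our own illustrative choices.

```python
# Illustrative sketch of on-line Boolean prediction with expert advice:
# the learner predicts the weighted-majority bit and multiplicatively
# down-weights every expert whose advice was wrong.

def bayes_mixture_predict(expert_preds, weights):
    """Predict 1 iff the weight mass voting for 1 is at least half the total."""
    mass_one = sum(w for p, w in zip(expert_preds, weights) if p == 1)
    return 1 if 2 * mass_one >= sum(weights) else 0

def update_weights(expert_preds, weights, outcome, beta=0.5):
    """Penalize experts that predicted the wrong bit by a factor of beta."""
    return [w * (beta if p != outcome else 1.0)
            for p, w in zip(expert_preds, weights)]

# Tiny run: two experts; expert 0 matches the sequence on every round.
weights = [1.0, 1.0]
sequence = [1, 1, 0]
advice = [[1, 0], [1, 0], [0, 1]]
mistakes = 0
for preds, bit in zip(advice, sequence):
    if bayes_mixture_predict(preds, weights) != bit:
        mistakes += 1
    weights = update_weights(preds, weights, bit)
```

Under this update the consistently correct expert keeps weight 1.0 while the other's weight decays geometrically, which is the mechanism behind the relative mistake bounds the abstract refers to.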

Cited by 12 publications (6 citation statements)
References 11 publications
“…This bound improves on a result obtained by Cesa-Bianchi, Helmbold, and Panizza (1998), which was essentially the best robust bound to date. In the case of randomized binary classification we adapt the p-norm algorithm to learn with respect to an infinite pool of experts.…”
supporting
confidence: 80%
“…Bounds for algorithms that do not need K are provided by Cesa-Bianchi et al. (1997) and Cesa-Bianchi, Helmbold, and Panizza (1998). In the first paper the authors exploit a sophisticated doubling trick to repeatedly re-estimate the best η for Weighted Majority.…”
Section: If p = 2(ln N + |K − 2R + 1|)/(K + |K − 2R + 1|)
mentioning
confidence: 99%
“…Aside from the regression case, another different but seemingly related learning model is the so-called expert case (e.g., Littlestone & Warmuth, 1989; Vovk, 1990; Cesa-Bianchi et al., 1997; Cesa-Bianchi, Helmbold & Panizza, 1996), where there is only a single relevant attribute (that is, perfect classification can be accomplished by a discriminant with a single non-zero weight). Algorithms for learning general linear discriminants can of course still be applied in this case, so it would be interesting to see if our analysis adds any additional insight.…”
Section: Related Work
mentioning
confidence: 99%
“…For further information on the theory of prediction with expert advice, the reader can also consult, e.g., (Auer & Long, 1994; Cesa-Bianchi, Helmbold, & Panizza, 1996; Feder, Merhav, & Gutman, 1992; Yamanishi, 1995).…”
Section: Introduction
mentioning
confidence: 99%