Proceedings of the Eleventh Annual Conference on Computational Learning Theory 1998
DOI: 10.1145/279943.279949
Tracking the best regressor

Abstract: In most of the on-line learning research the total on-line loss of the algorithm is compared to the total loss of the best off-line predictor u from a comparison class of predictors. We call such bounds static bounds. The interesting feature of these bounds is that they hold for an arbitrary sequence of examples. Recently some work has been done where the comparison vector u_t at each trial t is allowed to change with time, and the total on-line loss of the algorithm is compared to the sum of the losses of u_t at…
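To make the contrast in the abstract concrete, here is a rough sketch of the two kinds of bounds in illustrative notation (the loss L, the predictions ŷ_t, and the movement term are assumptions, not notation taken from the paper):

```latex
% Static bound: compare to the single best fixed comparator u.
\sum_{t=1}^{T} L(y_t, \hat y_t) \;-\; \min_{u \in \mathcal{U}} \sum_{t=1}^{T} L(y_t, u \cdot x_t)

% Shifting bound: compare to a sequence u_1, \dots, u_T that may change over time,
% with the bound typically growing with how much the sequence moves,
% e.g. with \sum_{t=2}^{T} \| u_t - u_{t-1} \|.
\sum_{t=1}^{T} L(y_t, \hat y_t) \;-\; \sum_{t=1}^{T} L(y_t, u_t \cdot x_t)
```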

Cited by 48 publications (68 citation statements) · References 20 publications
“…(12). A similar problem was tackled by Herbster and Warmuth [HW01], who provided O(m log(m)) and O(m) algorithms for performing entropic projections. For completeness, in the rest of this section we outline the simpler O(m log(m)) algorithm.…”
Section: Corollary 13 Assume That the Conditions Stated In
confidence: 98%
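As a point of reference for the projection step this citation refers to, below is a minimal sketch of an O(m log m) relative-entropy (KL) projection onto a lower-bounded probability simplex, the kind of constraint set used for tracking bounds. The constraint set {w : sum w_i = 1, w_i >= eps}, the function name, and the eps parameter are assumptions for illustration, not the exact formulation in [HW01].

```python
import numpy as np

def entropic_projection(q, eps):
    """Project a probability vector q onto {w : sum_i w_i = 1, w_i >= eps},
    minimizing the relative entropy KL(w || q).  The KKT conditions give
    w_i = max(eps, c * q_i) for a single scaling constant c > 0; sorting q
    makes finding c an O(m log m) procedure."""
    q = np.asarray(q, dtype=float)
    m = len(q)
    assert eps * m <= 1.0, "the constraint set is empty unless eps * m <= 1"
    q_sorted = np.sort(q)                        # ascending; clipped entries are the smallest
    suffix = np.cumsum(q_sorted[::-1])[::-1]     # suffix[k] = sum of q_sorted[k:]
    for k in range(m):                           # hypothesize that k entries are clipped to eps
        c = (1.0 - k * eps) / suffix[k]
        below = (k == 0) or (c * q_sorted[k - 1] <= eps)   # clipped entries really fall below eps
        above = c * q_sorted[k] >= eps                     # kept entries really stay above eps
        if below and above:
            return np.maximum(eps, c * q)
    return np.full(m, 1.0 / m)                   # unreachable for a feasible eps; safe fallback
```

The sort dominates the O(m log m) cost; the scan that determines how many coordinates are clipped is linear.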
“…Local bounds are thus appropriate when the best predictor for the example sequence is changing over time. There have been a number of papers [28,20,3,43,6,21] which prove loss bounds in terms of a measure of the amount of change of the best predictor over time. These bounds have been called shifting or switching bounds.…”
Section: Methods For Local Loss Bounds
confidence: 99%
“…These bounds have been called shifting or switching bounds. The local bounds of this section are direct simplifications of the shifting bounds in [21]. Here we give local bounds rather than shifting bounds, however, since less introductory machinery is required, the bounds are easier to interpret, and weaker assumptions on the example sequence are possible in the theorem statements.…”
Section: Methods For Local Loss Bounds
confidence: 99%
“…Again we aim to bound the total loss of the on-line algorithm minus the total loss of the best comparator of this form. Such bounds have been obtained by Herbster and Warmuth (1998) for the case of linear regression using first-order algorithms. We would like to know whether there is a simple second-order algorithm for linear regression that requires O(n²) update time per trial and for which the additional loss grows with the sums of the logs of the section lengths.…”
Section: Conclusion and Open Problems
confidence: 99%
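For context on what "O(n²) update time per trial" for a second-order method looks like, here is a minimal sketch of online ridge regression whose inverse correlation matrix is maintained with the Sherman–Morrison identity. This is illustrative background only: the class name, the regularization parameter a, and the specific forecaster are assumptions, not the algorithm of Herbster and Warmuth (1998) or the one the open problem asks for.

```python
import numpy as np

class OnlineRidge:
    """Second-order online linear regression sketch: maintain the inverse of
    A_t = a*I + sum_{s<=t} x_s x_s^T with Sherman-Morrison rank-one updates,
    so each trial costs O(n^2) instead of O(n^3) for a fresh inversion."""

    def __init__(self, n, a=1.0):
        self.Ainv = np.eye(n) / a   # inverse of the regularized correlation matrix
        self.b = np.zeros(n)        # running sum of y_s * x_s

    def predict(self, x):
        # weight vector w = A^{-1} b, evaluated lazily; two matrix-vector products, O(n^2)
        return float(x @ (self.Ainv @ self.b))

    def update(self, x, y):
        # Sherman-Morrison: (A + x x^T)^{-1} = A^{-1} - (A^{-1} x)(A^{-1} x)^T / (1 + x^T A^{-1} x)
        Ax = self.Ainv @ x
        self.Ainv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        self.b += y * x

# toy usage: predict before each label arrives, then update
rng = np.random.default_rng(0)
learner = OnlineRidge(n=3, a=1.0)
for _ in range(100):
    x = rng.normal(size=3)
    y = x @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal()
    y_hat = learner.predict(x)
    learner.update(x, y)
```

Each trial touches only matrix–vector products and a rank-one correction, so the per-trial cost stays at O(n²) rather than the O(n³) of refitting from scratch.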