2016
DOI: 10.48550/arxiv.1604.06162
Preprint

The Extended Littlestone's Dimension for Learning with Mistakes and Abstentions

Abstract: This paper studies classification with an abstention option in the online setting. In this setting, examples arrive sequentially, the learner is given a hypothesis class H, and the goal of the learner is to either predict a label on each example or abstain, while ensuring that it does not make more than a pre-specified number of mistakes when it does predict a label. Previous work on this problem has left open two main challenges. First, not much is known about the optimality of algorithms, and in particular, a…
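To make the protocol concrete, here is a minimal sketch in Python, assuming a finite hypothesis class and a realizable stream (some hypothesis in H never errs); the function and variable names are illustrative, not from the paper. While the mistake budget lasts, the learner gambles on a majority vote of the hypotheses still consistent with the history; once the budget is spent, it predicts only on unanimous agreement and abstains otherwise.

```python
from collections import Counter

def predict_or_abstain(hypotheses, stream, mistake_budget):
    """Illustrative online learner (not the paper's algorithm): makes at
    most `mistake_budget` mistakes, paying in abstentions afterwards.
    Assumes realizability: some hypothesis is consistent with every label."""
    consistent = list(hypotheses)            # version space: not yet refuted
    mistakes = abstentions = 0
    for x, y in stream:                      # examples arrive sequentially
        votes = Counter(h(x) for h in consistent)
        if len(votes) == 1 or mistakes < mistake_budget:
            label, _ = votes.most_common(1)[0]   # majority (or unanimous) label
            if label != y:
                mistakes += 1                # only possible while under budget
        else:
            abstentions += 1                 # no consensus, no budget: abstain
        consistent = [h for h in consistent if h(x) == y]  # drop refuted ones
    return mistakes, abstentions
```

Under realizability a unanimous vote is always correct, so the learner never exceeds its budget; the question the paper studies is how few abstentions suffice for a given mistake budget, which is what the extended Littlestone's dimension of the title is meant to characterize.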

Cited by 1 publication (2 citation statements)
References 21 publications (16 reference statements)
“…predictor is always correct) among a set of predictors and aims to find this perfect predictor with a minimum number of refusals. Sayedi et al. [31] extend this framework by allowing a fixed error budget k and characterizing the minimum number of refusals while keeping the number of errors below k. Finally, Zhang et al. [32] relax the perfect-predictor assumption to an l-bias assumption (i.e., the predictor makes at most l errors) and allow the algorithm to refuse in order to achieve an optimal error-refusal trade-off.…”
Section: Prior Work (mentioning)
confidence: 99%
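Under the same illustrative assumptions as the sketch after the abstract, the perfect-predictor framework quoted first corresponds to a mistake budget of zero: the learner never gambles, so every refusal is caused by a disagreement that eliminates at least one surviving predictor, bounding the number of refusals by |H| - 1. A hypothetical toy run:

```python
# Perfect-predictor case of predict_or_abstain above: never gamble, only refuse.
# Hypothetical toy class: threshold classifiers over small integers.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(5)]
target = hypotheses[3]                        # plays the "perfect predictor"
stream = [(x, target(x)) for x in (0, 4, 2, 3, 1)]

mistakes, refusals = predict_or_abstain(hypotheses, stream, mistake_budget=0)
assert mistakes == 0      # with a perfect predictor, unanimity is always correct
print(refusals)           # at most len(hypotheses) - 1 = 4 refusals
```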
“…where (32) follows from the sum of a geometric series and (33) follows from the definition of K*. Finally, the desired result is obtained by dividing both sides by T*.…”
Section: Algorithm 3: SafePredict with Doubling Trick (mentioning)
confidence: 99%
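The quoted step is the standard geometric-series calculation behind any doubling-trick analysis. A hedged reconstruction, assuming (the excerpt does not say this) that epoch k of the algorithm runs for 2^k rounds and K* is the index of the final epoch, so that 2^{K*} <= T*:

\sum_{k=0}^{K^*} 2^k \;=\; 2^{K^*+1} - 1 \;\le\; 2 \cdot 2^{K^*} \;\le\; 2\,T^*.

Dividing both sides by T*, as the excerpt says, turns the cumulative quantity into a per-round one, at the cost of the constant factor of 2 that the doubling trick always incurs.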