2021
DOI: 10.1145/3447579

Competitive Caching with Machine Learned Advice

Abstract: Traditional online algorithms encapsulate decision making under uncertainty, and give ways to hedge against all possible future events, while guaranteeing a nearly optimal solution, as compared to an offline optimum. On the other hand, machine learning algorithms are in the business of extrapolating patterns found in the data to predict the future, and usually come with strong guarantees on the expected generalization error. In this work, we develop a framework for augmenting online algorithms with a…

Cited by 138 publications (279 citation statements)
References 36 publications
“…In the wider context of online algorithms with predictions, the three goals discussed in this paper can be stated more generally: Consistency: In the limit as the prediction quality becomes perfect, the performance should approach that of the optimal algorithm with perfect information. For instance, one might try to achieve α-consistency [7], which requires that the competitive ratio is bounded by α as the error in the predictions goes to 0. Robustness: In the limit as the prediction quality becomes arbitrarily poor, the performance should be comparable to that of the optimal algorithm with perfect information.…”
Section: Discussion of Our Results
Mentioning, confidence: 99%
“…Robustness: In the limit as the prediction quality becomes arbitrarily poor, the performance should be comparable to that of the optimal algorithm with perfect information. For instance, one might try to achieve β-robustness [7], which requires that the competitive ratio is bounded by β under arbitrarily poor predictions. Graceful Degradation: As the prediction quality worsens from perfect to worthless, the performance should degrade smoothly and slowly.…”
Section: Discussion of Our Results
Mentioning, confidence: 99%
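For concreteness, the consistency and robustness goals quoted above are usually stated as bounds on the cost ALG of the learning-augmented algorithm relative to the offline optimum OPT, in terms of a prediction error η. The following is a minimal sketch in the common (α, β) notation; α, β, and the error term f(η) are generic placeholders, not necessarily the symbols used in the citing paper:

```latex
% ALG(\sigma): cost of the learning-augmented algorithm on input \sigma,
% OPT(\sigma): cost of the offline optimum, \eta: total prediction error.
\[
  \text{$\alpha$-consistency: } \mathrm{ALG}(\sigma) \le \alpha \cdot \mathrm{OPT}(\sigma) \quad \text{whenever } \eta = 0,
\]
\[
  \text{$\beta$-robustness: } \mathrm{ALG}(\sigma) \le \beta \cdot \mathrm{OPT}(\sigma) \quad \text{for every value of } \eta,
\]
\[
  \text{graceful degradation: } \mathrm{ALG}(\sigma) \le \min\bigl(\alpha \cdot \mathrm{OPT}(\sigma) + f(\eta),\ \beta \cdot \mathrm{OPT}(\sigma)\bigr), \quad f(0) = 0.
\]
```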
“…Recently, there has been significant interest in unifying these two perspectives, with the goal of developing algorithms that balance between consistency and robustness. To this point, progress has been made in a few online algorithm settings, e.g., the ski-rental, online matching, and non-clairvoyant scheduling problems [15,17,22,4,7,6,5]. In these settings, algorithms have been designed that can achieve near-optimal performance when the prediction error is small while also maintaining robustness when the prediction error is large.…”
Section: Introduction
Mentioning, confidence: 99%
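The trade-off described in this passage can be made concrete with the ski-rental setting it mentions. Below is a minimal, illustrative Python sketch in the spirit of the well-known deterministic prediction-based strategy of Purohit, Svitkina, and Kumar; the function names, the trade-off parameter lam, and the unit rental cost are assumptions for illustration, not the notation or method of the cited works:

```python
import math

def ski_rental_buy_day(buy_cost: int, predicted_days: int, lam: float) -> int:
    """Day on which to buy skis; renting costs 1 per day until then.

    lam in (0, 1] controls how much the prediction is trusted: a small lam
    gives roughly (1 + lam)-consistency (near-optimal when the prediction is
    accurate) at the price of roughly (1 + 1/lam)-robustness (the worst-case
    ratio when the prediction is badly wrong).
    """
    if predicted_days >= buy_cost:
        # Prediction says the season is long enough that buying pays off,
        # so buy early, after renting only about lam * buy_cost days.
        return math.ceil(lam * buy_cost)
    # Prediction says the season is short: keep renting, but hedge by
    # buying after about buy_cost / lam days in case the prediction is wrong.
    return math.ceil(buy_cost / lam)

def cost(actual_days: int, buy_day: int, buy_cost: int) -> int:
    """Cost actually incurred: rent until buy_day, then buy if the season lasts."""
    if actual_days < buy_day:
        return actual_days           # season ended before we ever bought
    return (buy_day - 1) + buy_cost  # rented buy_day - 1 days, then bought

# Example: buying costs 10, the prediction says 3 days, but the season lasts 30.
# With lam = 0.5 the algorithm buys on day 20 and pays 29, versus OPT = 10,
# so the ratio stays below the 1 + 1/lam = 3 robustness bound.
```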