Motivated by the fact that competitive analysis yields overly pessimistic results when applied to the paging problem, there has been considerable research interest in refining competitive analysis and in developing alternative models for studying online paging. In this paper, we propose a new, simple model for studying paging with locality of reference. The model is closely related to Denning's working set concept and directly reflects the amount of locality that request sequences exhibit. We use the page fault rate, the performance measure used in practice, to evaluate the quality of paging algorithms. We develop tight or nearly tight bounds on the fault rates achieved by popular paging algorithms such as LRU, FIFO, deterministic Marking strategies, and LFD. These bounds show that LRU is an optimal online algorithm, whereas FIFO and Marking strategies are not optimal in general. We present an experimental study comparing the page fault rates proven in our analyses to the page fault rates observed in practice.
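To make the fault-rate measure concrete, here is a minimal sketch, not taken from the paper, that simulates LRU and FIFO on a short request sequence and reports their page fault rates; the cache size k and the example sequence are illustrative assumptions.

```python
from collections import OrderedDict, deque

def lru_fault_rate(requests, k):
    """Simulate LRU with a cache of size k; return the page fault rate."""
    cache = OrderedDict()              # keys ordered from least- to most-recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults / len(requests)

def fifo_fault_rate(requests, k):
    """Simulate FIFO with a cache of size k; return the page fault rate."""
    cache, order = set(), deque()
    faults = 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:
                cache.discard(order.popleft())  # evict the oldest resident page
            cache.add(page)
            order.append(page)
    return faults / len(requests)

# Illustrative request sequence with some locality of reference.
requests = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 3]
print(lru_fault_rate(requests, k=3), fifo_fault_rate(requests, k=3))
```

On sequences with strong locality, LRU re-hits recently used pages and so tends toward a lower fault rate than FIFO, which is the intuition behind the kind of bounds the paper proves.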
The relative worst-order ratio, a relatively new measure for the quality of on-line algorithms, is extended and applied to the paging problem. We obtain results significantly different from those obtained with the competitive ratio. First, we devise a new deterministic paging algorithm, Retrospective-LRU, and show that, according to the relative worst-order ratio and in contrast with the competitive ratio, it performs better than LRU. Our experimental results, though not conclusive, are slightly positive and leave open the possibility that Retrospective-LRU or similar algorithms may be worth considering in practice. Furthermore, the relative worst-order ratio (and practice) indicates that LRU is better than the marking algorithm FWF, even though all deterministic marking algorithms have the same competitive ratio. Look-ahead is also shown to be a significant advantage under this new measure, whereas the competitive ratio does not reflect that look-ahead can be helpful. Finally, under the relative worst-order ratio, as with the competitive ratio, no deterministic marking algorithm can be significantly better than LRU, but the randomized algorithm MARK is better than LRU.
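For reference, FWF (Flush-When-Full) is the marking algorithm that evicts its entire cache whenever a fault occurs on a full cache. The sketch below is an illustration rather than anything from the paper: it simulates FWF and shows how badly it can do relative to LRU on a sequence with locality; the cache size and sequence are assumptions chosen for the example.

```python
def fwf_fault_rate(requests, k):
    """Flush-When-Full: on a fault with a full cache, evict every page."""
    cache, faults = set(), 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:
                cache.clear()      # flush the entire cache
            cache.add(page)
    return faults / len(requests)

# On this sequence FWF faults on all 10 requests for k=3,
# whereas an LRU cache of size 3 faults only 6 times.
requests = [1, 2, 3, 4, 3, 2, 1, 2, 3, 4]
print(fwf_fault_rate(requests, k=3))
```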
Online algorithms with advice is an area of research where one attempts to measure how much knowledge of the future is necessary to achieve a given competitive ratio. The lower bound results give robust bounds on what is possible using semi-online algorithms. On the other hand, when the advice is of an obtainable form, algorithms using advice can lead to semi-online algorithms. There are strong relationships between advice complexity and randomization, and advice complexity has led to the introduction of the first complexity classes for online problems. This survey concerning online algorithms with advice explains the models, motivates the study in general, presents some examples of the work that has been carried out, and includes a fairly complete set of references, organized by problem studied.
We define a new measure for the quality of on-line algorithms, the relative worst order ratio, using ideas from the Max/Max ratio (Ben-David & Borodin 1994) and from the random order ratio (Kenyon 1996). The new ratio is used to compare on-line algorithms directly by taking the ratio of their performances on their respective worst permutations of a worst-case sequence. Two variants of the bin packing problem are considered: the Classical Bin Packing problem, where the goal is to fit all items in as few bins as possible, and the Dual Bin Packing problem, which is the problem of maximizing the number of items packed in a fixed number of bins. Several known algorithms are compared using this new measure, and a new, simple variant of First-Fit is proposed for Dual Bin Packing. Many of our results are consistent with those previously obtained with the competitive ratio or the competitive ratio on accommodating sequences, but new separations and easier proofs are found.
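As an illustration of the measure, the brute-force sketch below compares the classical First-Fit algorithm with Next-Fit on their respective worst permutations of one small, fixed item sequence. This is only a sketch of the idea: the actual relative worst-order ratio additionally takes a supremum over all sequences (asymptotically), and Next-Fit, the item sizes, and the unit bin capacity are assumptions made purely for this example.

```python
from itertools import permutations

def first_fit(items, capacity=1.0):
    """Classical First-Fit: place each item in the first bin it fits; return #bins used."""
    bins = []
    for x in items:
        for i, load in enumerate(bins):
            if load + x <= capacity + 1e-9:
                bins[i] += x
                break
        else:
            bins.append(x)
    return len(bins)

def next_fit(items, capacity=1.0):
    """Next-Fit: keep only the most recently opened bin; return #bins used."""
    bins, current = 0, capacity + 1.0   # force opening a bin for the first item
    for x in items:
        if current + x > capacity + 1e-9:
            bins, current = bins + 1, 0.0
        current += x
    return bins

def worst_order_cost(alg, items):
    """Cost of `alg` on its worst permutation of `items` (brute force, small inputs only)."""
    return max(alg(list(p)) for p in permutations(items))

items = [0.6, 0.5, 0.4, 0.3, 0.2]
# Ratio of the two algorithms' costs on their respective worst orderings of the sequence.
print(worst_order_cost(next_fit, items) / worst_order_cost(first_fit, items))
```

For a minimization problem like Classical Bin Packing the worst permutation is the one maximizing the number of bins used; for a maximization problem like Dual Bin Packing one would instead take the permutation minimizing the number of items packed.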