2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on "real world" Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst-case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every p ∈ N, a family of L_p instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which n points are placed uniformly at random in the unit square [0, 1]^2, where it was shown that the expected number of steps is bounded by Õ(n^10) for Euclidean instances. We consider a more advanced model of probabilistic instances in which the points can be placed independently according to general distributions on [0, 1]^d, for an arbitrary d ≥ 2. In particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number n of points and the maximal density φ of the probability distributions. We show an upper bound on the expected length of any 2-Opt improvement path of Õ(n^(4+1/3) · φ^(8/3)).
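For concreteness, here is a minimal Python sketch of the 2-Opt local search discussed in this abstract: it repeatedly applies improving edge-exchange steps until a local optimum is reached. The tour representation and the distance matrix `dist` are illustrative assumptions, not part of the paper.

```python
def tour_length(tour, dist):
    """Total length of a closed tour, given a symmetric distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """Run 2-Opt until no improving step exists (a local optimum).

    A 2-Opt step removes two edges (a,b) and (c,d) of the tour and reconnects
    it with (a,c) and (b,d), which reverses the segment between b and c.
    """
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip the degenerate case where both removed edges share a vertex.
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Exchange (a,b),(c,d) for (a,c),(b,d) if it shortens the tour.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

The exponential lower bound of the paper concerns exactly how many improving steps the `while` loop above can make in the worst case; the probabilistic upper bound Õ(n^(4+1/3) · φ^(8/3)) bounds the expected number of such steps.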
In the classic minimum makespan scheduling problem, we are given an input sequence of jobs with processing times. A scheduling algorithm has to assign the jobs to m parallel machines. The objective is to minimize the makespan, which is the time it takes until all jobs are processed. In this paper, we consider online scheduling algorithms without preemption. However, we do not require that each arriving job has to be assigned immediately to one of the machines. A reordering buffer with limited storage capacity can be used to reorder the input sequence in a restricted fashion so as to schedule the jobs with a smaller makespan. This is a natural extension of lookahead. We present an extensive study of the power and limits of online reordering for minimum makespan scheduling. As our main result, we give, for m identical machines, tight and, in comparison to the problem without reordering, much improved bounds on the competitive ratio for minimum makespan scheduling with reordering buffers. Depending on m, the achieved competitive ratio lies between 4/3 and 1.4659. This optimal ratio is achieved with a buffer of size Θ(m). We show that larger buffer sizes do not yield an additional advantage and that a buffer of size Ω(m) is necessary to achieve this competitive ratio. Further, we present several algorithms for different buffer sizes. Among others, we introduce, for every buffer size k ∈ [1, (m + 1)/2], a (2 − 1/(m − k + 1))-competitive algorithm, which nicely generalizes the well-known result of Graham. For m uniformly related machines, we give a scheduling algorithm that achieves a competitive ratio of 2 with a reordering buffer of size m. Considering that the best known competitive ratio for uniformly related machines without reordering is 5.828, this result emphasizes the power of online reordering even further.
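The abstract does not spell out the algorithms themselves, but the following Python sketch illustrates one natural buffer strategy in the same spirit: keep the k largest pending jobs in the buffer, greedily assign the smallest buffered job to the currently least loaded machine whenever the buffer overflows, and flush the remaining buffered jobs largest-first at the end. The function `schedule_with_buffer` and its policy are illustrative assumptions, not the paper's algorithm.

```python
import heapq

def schedule_with_buffer(jobs, m, k):
    """Greedy makespan scheduling with a reordering buffer of size k (a sketch).

    Machines are kept in a min-heap keyed by current load; the buffer is a
    min-heap of job sizes, so evicting its minimum keeps the k largest jobs
    buffered. Returns the resulting makespan.
    """
    loads = [0.0] * m
    machine_heap = [(0.0, i) for i in range(m)]
    heapq.heapify(machine_heap)
    buffer = []  # min-heap of buffered job sizes

    def assign(p):
        load, i = heapq.heappop(machine_heap)  # least loaded machine
        load += p
        loads[i] = load
        heapq.heappush(machine_heap, (load, i))

    for p in jobs:
        heapq.heappush(buffer, p)
        if len(buffer) > k:
            assign(heapq.heappop(buffer))  # schedule the smallest buffered job
    for p in sorted(buffer, reverse=True):  # flush the buffer, largest first
        assign(p)
    return max(loads)
```

Without a buffer (k = 0) this degenerates to Graham's list scheduling, which is (2 − 1/m)-competitive; the paper shows how carefully designed buffer strategies improve this to between 4/3 and 1.4659 with a buffer of size Θ(m).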
We consider the problem of resource allocation in a parallel environment where new incoming resources arrive online in groups or batches. We study this scenario in an abstract framework of allocating balls into bins. We revisit the allocation algorithm GREEDY[2] due to Azar, Broder, Karlin, and Upfal (SIAM J. Comput. 1999), in which, for sequentially arriving balls, each ball chooses two bins at random and gets placed into the one of those two bins with minimum load. The maximum load of any bin after the last ball is allocated by GREEDY[2] is well understood, as is, indeed, the entire load distribution, for a wide range of settings. The main goal of our paper is to study balls-and-bins allocation processes in a parallel environment with the balls arriving in batches. In our model, m balls arrive in batches of size n each (where n is also the number of bins), and the balls in each batch are distributed among the bins simultaneously. In this setting, we consider an algorithm that uses GREEDY[2] for all balls within a given batch; the answers to those balls' load queries refer to the bin loads at the end of the previous batch and do not depend in any way on decisions made by other balls from the same batch. Our main contribution is a tight analysis of the new process allocating balls in batches: we show that after the allocation of any number of batches, the gap between maximum and minimum load is O(log n) with high probability, and is therefore independent of the number of batches used.
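A short Python sketch of the batched process described above may help: within a batch, each ball samples two bins and compares their loads as they stood at the end of the previous batch, so placements made earlier in the same batch have no influence. The function name and random-number handling are illustrative assumptions.

```python
import random

def batched_greedy2(num_bins, num_batches, rng=random):
    """GREEDY[2] with batched arrivals: each batch contains n = num_bins balls.

    Every ball queries the loads of two uniformly random bins as recorded in a
    snapshot taken at the end of the previous batch, and goes to the less
    loaded of the two. Returns the final load vector.
    """
    loads = [0] * num_bins
    for _ in range(num_batches):
        snapshot = list(loads)            # loads at the end of the previous batch
        for _ in range(num_bins):         # one batch of n balls
            i = rng.randrange(num_bins)
            j = rng.randrange(num_bins)
            # Decide by the snapshot, not by the current in-batch loads.
            chosen = i if snapshot[i] <= snapshot[j] else j
            loads[chosen] += 1
    return loads
```

The paper's result says that for this process the gap max(loads) − min(loads) stays O(log n) with high probability, regardless of how many batches are run.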