We study the problem of fairly allocating a set of indivisible goods to a set of people from an algorithmic perspective. Fair division has been a central topic in the economic literature and several concepts of fairness have been suggested. The criterion we focus on is the maximum envy between any pair of players. An allocation is called envy-free if every player prefers her own share to the share of any other player. When the goods are divisible, or when there is a sufficient amount of one divisible good, envy-free allocations always exist. In the presence of indivisibilities, however, this is not the case. We first show that when all goods are indivisible, there always exist allocations in which the envy is bounded by the maximum marginal utility, and we present a simple polynomial time algorithm for computing such allocations. We further show that our algorithm can be applied to the continuous cake-cutting model as well, yielding a procedure that produces ε-envy-free allocations with a linear number of cuts. We then look at the optimization problem of finding an allocation with minimum possible envy. In the general case, there is no polynomial time algorithm (or even approximation algorithm) for the problem, unless P = NP. We consider natural special cases (e.g., additive utilities) that are closely related to a class of job scheduling problems. Polynomial time approximation algorithms as well as inapproximability results are obtained. Finally, we investigate the problem of designing truthful mechanisms for producing allocations with bounded envy.
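As a concrete illustration (our own sketch, not taken from the abstract above), the following Python snippet implements an envy-cycle-elimination style procedure under the assumption of additive utilities, given as a matrix u[i][g] of player i's value for good g; under this scheme no player's envy ever exceeds the largest single-good value, i.e., the maximum marginal utility. All names here are hypothetical.

```python
# Sketch of an envy-cycle-elimination style allocation (assumed additive
# utilities; u[i][g] is player i's value for good g). Goods are handed out one
# at a time to a player whom nobody currently envies; such a player is found by
# rotating bundles along envy cycles.

def allocate(u):
    n, m = len(u), len(u[0])
    bundles = [set() for _ in range(n)]

    def value(i, bundle):
        return sum(u[i][g] for g in bundle)

    def envies(i, j):
        # Player i envies player j if i values j's bundle strictly more.
        return value(i, bundles[j]) > value(i, bundles[i])

    def find_envy_cycle():
        # Precondition: every player is envied, so following "someone who
        # envies the current player" edges must eventually revisit a node.
        seen, i = {}, 0
        while i not in seen:
            seen[i] = len(seen)
            i = next(j for j in range(n) if j != i and envies(j, i))
        tail = sorted(seen, key=seen.get)[seen[i]:]
        # Reorder so that each player in the cycle envies the next one.
        return [tail[0]] + tail[:0:-1]

    def rotate(cycle):
        # Give every player on the cycle the bundle she envies; each strictly
        # improves, so repeated rotations terminate.
        old = [bundles[c] for c in cycle]
        for k, c in enumerate(cycle):
            bundles[c] = old[(k + 1) % len(cycle)]

    def find_unenvied():
        while True:
            unenvied = [i for i in range(n)
                        if not any(envies(j, i) for j in range(n) if j != i)]
            if unenvied:
                return unenvied[0]
            rotate(find_envy_cycle())

    for g in range(m):
        bundles[find_unenvied()].add(g)
    return bundles
```

For example, allocate([[5, 1], [1, 5]]) gives each of the two players her preferred good, and neither envies the other.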
How does a search engine company decide what ads to display with each query so as to maximize its revenue? This turns out to be a generalization of the online bipartite matching problem. We introduce the notion of a trade-off revealing LP and use it to derive an optimal algorithm achieving a competitive ratio of 1 − 1/e for this problem.
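For intuition only, here is a minimal Python sketch of a budget-discounted greedy rule of the kind analyzed for this problem: each arriving query goes to the advertiser maximizing her bid scaled by the tradeoff function ψ(x) = 1 − e^(x−1), where x is the fraction of her budget already spent. The data format and function names are assumptions for illustration, not the paper's interface.

```python
# Sketch: online ad allocation with a budget-discounted greedy rule. Each query
# is assigned to the advertiser maximizing bid * (1 - e^(x - 1)), where x is the
# fraction of that advertiser's budget already spent; this tradeoff function is
# the one behind the (1 - 1/e) guarantee under a small-bids assumption.
import math

def allocate_queries(budgets, queries):
    """budgets: {advertiser: budget}; queries: list of {advertiser: bid} dicts."""
    spent = {a: 0.0 for a in budgets}
    assignment = []
    for bids in queries:
        best, best_score = None, 0.0
        for a, bid in bids.items():
            remaining = budgets[a] - spent[a]
            if remaining <= 0:
                continue  # advertiser's budget is exhausted
            x = spent[a] / budgets[a]
            score = min(bid, remaining) * (1 - math.exp(x - 1))
            if score > best_score:
                best, best_score = a, score
        assignment.append(best)  # None means the query is dropped
        if best is not None:
            spent[best] += min(bids[best], budgets[best] - spent[best])
    return assignment, spent
```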
In this paper, we formalize the method of dual fitting and the idea of factor-revealing LPs. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(m log m) and O(n^3), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem. (This paper is based on the preliminary versions [31] and [21].)

All that was clear so far was that the set cover problem did not require the full power of dual fitting. However, in retrospect, its salient features are best illustrated again in the simple setting of the set cover problem; we do this in Section 9. The method of dual fitting can be described as follows, assuming a minimization problem. The basic algorithm is combinatorial; in the case of set cover it is in fact a simple greedy algorithm. Using the linear programming relaxation of the problem and its dual, one first interprets the combinatorial algorithm as a primal-dual-type algorithm, that is, an algorithm that iteratively makes primal and dual updates. Strictly speaking, this is not a primal-dual algorithm, since the dual solution computed is, in general, infeasible (see Section 9 for a discussion of this issue). However, one shows that the primal integral solution found by the algorithm is fully paid for by the dual computed; by fully paid for we mean that the objective function value of the primal solution is bounded by that of the dual. The main step in the analysis consists of dividing the dual by a suitable factor, say γ, and showing that the shrunk dual is feasible, i.e., that it fits into the given instance. The shrunk dual is then a lower bound on OPT, and γ is the approximation guarantee of the algorithm. Clearly, we need to find the minimum γ that suffices. Equivalently, this amounts to finding the worst possible instance, namely one in which the dual solution needs to be shrunk the most in order to be rendered feasible.

For each value of n_c, the number of cities, we define a factor-revealing LP that encodes the problem of finding the worst possible instance with n_c cities as a linear program. This gives a family of LPs, one for each value of n_c. The supremum of the optimal solutions to these LPs is then the best value for γ. In our case, we do not know how to compute this supremum directly. Instead, we obtain a feasible solution to the dual of each of these LPs. An upper bound on the objective function values of these duals can be computed, and it is an upper bound on the optimal γ. In our case, this upper bound is 1.861 for the first algorithm and 1.61 for the second one. In order to get a closely matching tight example, we numerically solve the factor-revealing LP for a large value of n_c. The technique of factor-revealing LPs is similar to the idea of LP bounds in coding theory. LP bounds give the best known bounds on the...
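To make the dual-fitting reading concrete in the set cover setting mentioned above, here is a hedged Python sketch: the greedy algorithm charges each newly covered element the average cost of the set that covers it, these charges play the role of (in general infeasible) dual variables, and shrinking them by H_k, with k the size of the largest set, renders the dual feasible. Variable and function names are illustrative.

```python
# Sketch: greedy set cover read through dual fitting. Each newly covered
# element is charged the average cost of the set that covers it; these charges
# fully pay for the primal solution, and dividing them by H_k (k = size of the
# largest set) yields a feasible dual, giving the H_k approximation guarantee.

def greedy_set_cover(universe, sets, cost):
    """universe: iterable of elements; sets: {name: set}; cost: {name: float}.
    Assumes the given sets together cover the universe."""
    uncovered = set(universe)
    price = {}      # dual-style charge per element
    chosen = []
    while uncovered:
        # Pick the most cost-effective set: smallest cost per newly covered element.
        name = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        newly = sets[name] & uncovered
        for e in newly:
            price[e] = cost[name] / len(newly)
        uncovered -= newly
        chosen.append(name)
    return chosen, price
```

The total primal cost equals sum(price.values()); shrinking every price by the factor γ = H_k makes the charges a feasible dual solution, which is exactly the "fitting" step described above.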
We quantify the effectiveness of random walks for searching and for the construction of unstructured peer-to-peer (P2P) networks. For searching, we argue that random walks achieve improvement over flooding in the case of clustered overlay topologies and in the case of re-issuing the same request several times. For construction, we argue that an expander can be maintained dynamically with a constant number of operations per addition. The key technical ingredient of our approach is a deep result from the theory of stochastic processes indicating that samples taken from consecutive steps of a random walk can achieve statistical properties similar to independent sampling (provided the second eigenvalue of the transition matrix is bounded away from 1, which translates to good expansion of the network; such connectivity is desired, and believed to hold, in every reasonable network and network model). This property has previously been used in complexity theory for the construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits into savings in processing overhead.
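As a rough sketch of the searching idea (assuming an adjacency-list overlay and a hypothetical has_item probe; this is not the paper's protocol), a single random walker visits nodes one hop at a time and the requester may re-issue the walk on failure:

```python
# Sketch: random-walk search over an unstructured overlay, assuming an
# adjacency-list graph and a hypothetical has_item(node) probe. The walker
# probes each visited node and gives up after ttl hops; the requester may then
# re-issue the walk, which the abstract argues beats flooding in clustered
# topologies.
import random

def random_walk_search(adj, start, has_item, ttl):
    node = start
    for hop in range(ttl):
        if has_item(node):
            return node, hop             # found after `hop` hops
        node = random.choice(adj[node])  # move to a uniformly random neighbor
    return None, ttl                     # not found within the hop budget
```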