We generalize recent work to construct a map from the conformal Navier-Stokes equations with holographically determined transport coefficients, in d spacetime dimensions, to the set of asymptotically locally AdS_{d+1} long-wavelength solutions of Einstein's equations with a negative cosmological constant, for all d > 2. We find simple explicit expressions for the stress tensor (slightly generalizing the recent result of Haack and Yarom (arXiv:0806.4602)), the full dual bulk metric, and an entropy current of this strongly coupled conformal fluid, to second order in the derivative expansion, for arbitrary d > 2. We also rewrite the well-known exact solutions for rotating black holes in AdS_{d+1} space in a manifestly fluid-dynamical form, generalizing earlier work in d = 4. To second order in the derivative expansion, this metric agrees with our general construction of the metric dual to fluid flows.
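For orientation, the derivative expansion referred to above organizes the boundary stress tensor of the conformal fluid schematically as follows (a sketch in conventions that may differ from the paper's; the explicit second-order tensors and their holographically determined coefficients, which are the paper's result, are abbreviated as \Pi^{\mu\nu}_{(2)} and not reproduced here):

```latex
% Schematic structure of the boundary stress tensor in the derivative
% expansion (conventions are illustrative and may differ from the paper's).
\begin{align}
  T^{\mu\nu} &= p\,\bigl(\eta^{\mu\nu} + d\, u^{\mu} u^{\nu}\bigr)
               \;-\; 2\eta\,\sigma^{\mu\nu}
               \;+\; \Pi^{\mu\nu}_{(2)} \;+\; \mathcal{O}(\partial^{3}),\\[2pt]
  \sigma^{\mu\nu} &= P^{\mu\alpha} P^{\nu\beta}\,
        \partial_{(\alpha} u_{\beta)}
        \;-\; \frac{1}{d-1}\, P^{\mu\nu}\,\partial_{\alpha} u^{\alpha},
  \qquad P^{\mu\nu} = \eta^{\mu\nu} + u^{\mu} u^{\nu}.
\end{align}
```

Here u^\mu is the fluid velocity, p \propto T^d is the conformal equation of state (which makes the zeroth-order term traceless), and for Einstein gravity the shear viscosity satisfies \eta/s = 1/(4\pi); \Pi^{\mu\nu}_{(2)} stands for the linear combination of independent conformal two-derivative tensors whose coefficients the paper computes.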
Combinatorial Auctions are a central problem in Algorithmic Mechanism Design: pricing and allocating goods to buyers with complex preferences in order to maximize some desired objective (e.g., social welfare, revenue, or profit). The problem has been well-studied in the case of limited supply (one copy of each item), and in the case of digital goods (the seller can produce additional copies at no cost). Yet in the case of resources (oil, labor, computing cycles, etc.), neither of these abstractions is just right: additional supplies of these resources can be found, but at increasing difficulty (marginal cost) as resources are depleted. In this work, we initiate the study of the algorithmic mechanism design problem of combinatorial pricing under increasing marginal cost. The goal is to sell these goods to buyers with unknown and arbitrary combinatorial valuation functions to maximize either the social welfare or the seller's profit; specifically, we focus on the setting of posted item prices with buyers arriving online. We give algorithms that achieve constant-factor approximations for a class of natural cost functions (linear, low-degree polynomial, logarithmic) and that give logarithmic approximations for arbitrary increasing marginal cost functions (along with a necessary additive loss). We show that these bounds are essentially best possible for these settings.
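To make the posted-price setting concrete, here is a minimal illustrative sketch (not the paper's mechanism): an online scheme that posts, for each item, a price equal to a fixed markup over the marginal cost of producing the next copy, so prices rise as supply is depleted. The cost function, the markup value, and the greedy buyer behavior below are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's mechanism): online posted item prices
# for goods whose production has increasing marginal cost.

def marginal_cost(item, copies_sold, degree=2):
    """Example convex production cost C(k) = k**degree, so the marginal
    cost of copy k+1 is C(k+1) - C(k)."""
    return (copies_sold + 1) ** degree - copies_sold ** degree

def run_posted_prices(buyers, items, markup=2.0, degree=2):
    """Each arriving buyer sees current item prices and buys a
    utility-maximizing bundle; prices rise as supply is depleted."""
    sold = {i: 0 for i in items}          # copies produced/sold so far
    welfare, revenue, cost = 0.0, 0.0, 0.0
    for valuation in buyers:              # valuation: dict frozenset(bundle) -> value
        prices = {i: markup * marginal_cost(i, sold[i], degree) for i in items}
        # Buyer picks the bundle maximizing value minus total posted price.
        best_bundle, best_utility = frozenset(), 0.0
        for bundle, value in valuation.items():
            utility = value - sum(prices[i] for i in bundle)
            if utility > best_utility:
                best_bundle, best_utility = bundle, utility
        for i in best_bundle:             # produce and sell the chosen copies
            revenue += prices[i]
            cost += marginal_cost(i, sold[i], degree)
            sold[i] += 1
        welfare += valuation.get(best_bundle, 0.0)
    return welfare - cost, revenue - cost  # social welfare, seller profit

# Toy usage: two resources and two buyers with simple bundle valuations.
items = ["oil", "labor"]
buyers = [
    {frozenset(["oil"]): 5.0, frozenset(["oil", "labor"]): 9.0},
    {frozenset(["labor"]): 6.0},
]
print(run_posted_prices(buyers, items))
```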
The stochastic matching problem deals with finding a maximum matching in a graph whose edges are unknown but can be accessed via queries. This is a special case of stochastic k-set packing, where the problem is to find a maximum packing of sets, each of which exists with some probability. In this paper, we provide edge and set query algorithms for these two problems, respectively, that provably achieve some fraction of the omniscient optimal solution. Our main theoretical result for the stochastic matching (i.e., 2-set packing) problem is the design of an adaptive algorithm that queries only a constant number of edges per vertex and achieves a (1 − ε) fraction of the omniscient optimal solution, for an arbitrarily small ε > 0. Moreover, this adaptive algorithm performs the queries in only a constant number of rounds. We complement this result with a non-adaptive (i.e., one round of queries) algorithm that achieves a (0.5 − ε) fraction of the omniscient optimum. We also extend both our results to stochastic k-set packing by designing an adaptive algorithm that achieves a (2/k − ε) fraction of the omniscient optimal solution, again with only O(1) queries per element. This guarantee is close to the best known polynomial-time approximation ratio of 3/(k+1) − ε for the deterministic k-set packing problem [22]. We empirically explore the application of (adaptations of) these algorithms to the kidney exchange problem, where patients with end-stage renal failure swap willing but incompatible donors. We show on both generated data and on real data from the first 169 match runs of the UNOS nationwide kidney exchange that even a very small number of non-adaptive edge queries per vertex results in large gains in expected successful matches.

Of interest, then, are algorithms that first query some subset of edges to find the ones that exist, and based on these queries, produce a matching that is as large as possible. The stochastic matching problem is a special case of stochastic k-set packing, where each set exists only with some probability, and the problem is to find a packing of maximum size of those sets that do exist. Without any constraints, one can simply query all edges or sets and then output the maximum matching or packing over those that exist, but this level of freedom may not always be available. We are interested in the tradeoff between the number of queries and the fraction of the omniscient optimal solution achieved. Specifically, we ask: In order to perform as well as the omniscient optimum in the stochastic matching problem, do we need to query (almost) all the edges, that is, do we need a budget of Θ(n) queries per vertex, where n is the number of vertices? Or can we, for any arbitrarily small ε > 0, achieve a (1 − ε) fraction of the omniscient optimum by using an o(n) per-vertex budget? We answer these questions, as well as their extensions to the k-set packing problem. We support our theoretical results empirically on both generated and real data from a large fielded kidney exchange ...
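The following sketch illustrates the multi-round adaptive querying idea described above: in each round, plan a maximum matching over potential edges that have not yet been queried among still-unmatched vertices, then query exactly those edges (so each vertex is queried at most once per round). The failure model, the number of rounds, and the use of networkx below are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
# Sketch of multi-round adaptive querying for stochastic matching
# (illustrative; parameters and failure model are assumptions).
import random
import networkx as nx

def adaptive_stochastic_matching(potential_edges, exists_prob, rounds=5, seed=0):
    """potential_edges: list of (u, v) pairs that may or may not exist.
    exists_prob: probability that a queried edge actually exists.
    Each round: plan a maximum matching over un-queried potential edges
    among still-unmatched vertices, then query exactly those edges."""
    rng = random.Random(seed)
    matched = set()            # vertices already matched via an existing edge
    queried = set()            # edges already queried (both orientations)
    found = []                 # edges confirmed to exist and used in the matching
    for _ in range(rounds):
        G = nx.Graph()
        for (u, v) in potential_edges:
            if (u, v) not in queried and u not in matched and v not in matched:
                G.add_edge(u, v)
        plan = nx.max_weight_matching(G, maxcardinality=True)
        if not plan:
            break
        for (u, v) in plan:    # at most one query per vertex per round
            queried.add((u, v))
            queried.add((v, u))
            if rng.random() < exists_prob:   # the edge turns out to exist
                found.append((u, v))
                matched.update([u, v])
    return found

# Toy usage: a small complete graph where each potential edge exists w.p. 0.5.
edges = [(i, j) for i in range(10) for j in range(i + 1, 10)]
print(len(adaptive_stochastic_matching(edges, exists_prob=0.5)))
```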
We design new approximation algorithms for the Multiway Cut problem, improving the previously known factor of 1.32388 [Buchbinder et al., 2013]. We proceed in three steps. First, we analyze the rounding scheme of Buchbinder et al. [2013] and design a modification that improves the approximation to (3 + √5)/4 ≈ 1.309017. We also present a tight example showing that this is the best approximation one can achieve with the type of cuts considered by Buchbinder et al. [2013]: (1) partitioning by exponential clocks, and (2) single-coordinate cuts with equal thresholds. Then, we prove that this factor can be improved by introducing a new rounding scheme: (3) single-coordinate cuts with descending thresholds. By combining these three schemes, we design an algorithm that achieves a factor of (10 + 4√3)/13 ≈ 1.30217. This is the best approximation factor that we are able to verify by hand. Finally, we show that by combining these three rounding schemes with the scheme of independent thresholds from Karger et al. [2004], the approximation factor can be further improved to 1.2965. This approximation factor has been verified only by computer.
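For concreteness, the sketch below shows the simplest member of the family of rounding schemes mentioned above: a single-coordinate cut with one (equal) random threshold, applied to a fractional solution of the simplex (CKR-type) relaxation. The descending-thresholds scheme and the combined algorithms that achieve the improved factors are not reproduced here; the threshold distribution, terminal ordering, and default label below are illustrative assumptions.

```python
# Illustrative single-coordinate threshold rounding for Multiway Cut
# (simplest "equal thresholds" variant only; not the paper's final scheme).
import random

def single_threshold_round(x, k, seed=None):
    """x: dict vertex -> list of k coordinates summing to 1 (a point in the
    simplex); terminal t sits at the unit vector e_t.  Returns vertex -> label."""
    rng = random.Random(seed)
    order = list(range(k))
    rng.shuffle(order)                 # random order over the k terminals
    theta = 1.0 - rng.random()         # threshold in (0, 1], shared by all terminals
    assignment = {}
    for v, coords in x.items():
        label = order[-1]              # default: last terminal in the order
        for t in order:
            if coords[t] >= theta:     # first coordinate clearing the threshold
                label = t
                break
        assignment[v] = label
    return assignment

# Toy usage: 3 terminals (unit vectors) and one fractional vertex.
x = {"t0": [1, 0, 0], "t1": [0, 1, 0], "t2": [0, 0, 1], "v": [0.5, 0.3, 0.2]}
print(single_threshold_round(x, k=3, seed=1))
```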
Kidney exchanges allow incompatible donor-patient pairs to swap kidneys, but each donation must pass three tests: blood, tissue, and crossmatch. In practice, a matching is computed based on the first two tests, and then a single crossmatch test is performed for each matched patient. However, if two crossmatches could be performed per patient, in principle significantly more successful exchanges could take place. In this paper, we ask: If we were allowed to perform two crossmatches per patient, could we harness this additional power optimally and efficiently? Our main result is a polynomial-time algorithm for this problem that almost surely computes optimal (up to lower-order terms) solutions on random large kidney exchange instances.