We study incentive compatible mechanisms for Combinatorial Auctions where the bidders have submodular (or XOS) valuations and are budget-constrained. Our objective is to maximize the liquid welfare, a notion of efficiency for budget-constrained bidders introduced by Dobzinski and Paes Leme (2014). We show that some of the known truthful mechanisms that best approximate the social welfare for Combinatorial Auctions with submodular bidders through demand query oracles can be adapted so that they retain truthfulness and achieve asymptotically the same approximation guarantees for the liquid welfare. More specifically, for the problem of optimizing the liquid welfare in Combinatorial Auctions with submodular bidders, we obtain a universally truthful randomized O(log m)-approximate mechanism, where m is the number of items, by adapting the mechanism of Krysta and Vöcking (2012). Additionally, motivated by large market assumptions often used in mechanism design, we introduce a notion of competitive markets and show that in such markets, liquid welfare can be approximated within a constant factor by a randomized universally truthful mechanism. Finally, in the Bayesian setting, we obtain a truthful O(1)-approximate mechanism for the case where bidder valuations are generated as independent samples from a known distribution, by adapting the results of Feldman, Gravin and Lucier (2014).
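As a rough illustration of the posted-price flavor of such demand-query mechanisms, the following Python sketch runs a sequential allocation in which each bidder, arriving in random order, answers a demand query at a uniform posted price subject to its budget. The `AdditiveBidder` class, the uniform price, and the greedy demand rule are hypothetical simplifications for illustration; this is not the actual mechanism of Krysta and Vöcking.

```python
import random

class AdditiveBidder:
    """Bidder with additive item values and a hard budget (hypothetical)."""
    def __init__(self, values, budget):
        self.values, self.budget = values, budget

    def demand(self, available, price):
        # Demand-query oracle: take items with positive margin, greedily by
        # value, while the cumulative payment fits inside the budget.
        # For a uniform price and additive values this greedy set is the
        # bidder's utility-maximizing, budget-feasible demand.
        chosen, spent = set(), 0.0
        for j in sorted(available, key=lambda j: -self.values[j]):
            if self.values[j] > price and spent + price <= self.budget:
                chosen.add(j)
                spent += price
        return chosen

def posted_price_mechanism(bidders, items, price):
    """Toy sequential posted-price allocation via demand queries."""
    available, allocation = set(items), {}
    order = random.sample(range(len(bidders)), len(bidders))  # random arrival
    for i in order:
        S = bidders[i].demand(available, price)   # ask the demand oracle
        allocation[i] = S
        available -= S                            # sold items leave the market
    return allocation

bidders = [AdditiveBidder({0: 3.0, 1: 1.0}, budget=2.0),
           AdditiveBidder({0: 2.0, 1: 2.5}, budget=5.0)]
print(posted_price_mechanism(bidders, items=[0, 1], price=1.5))
```

Because each bidder simply buys a utility-maximizing set at prices it cannot influence, posted-price schemes of this kind are truthful; the technical work in the paper lies in choosing the prices so that the resulting allocation approximates the liquid welfare.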
Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner. Because of this, the convergence properties of popular learning algorithms, like policy gradient and its variants, are poorly understood, except in specific classes of games (such as potential or two-player, zero-sum games). In view of this, we examine the long-run behavior of policy gradient methods with respect to Nash equilibrium policies that are second-order stationary (SOS) in a sense similar to the type of sufficiency conditions used in optimization. Our first result is that SOS policies are locally attracting with high probability, and we show that policy gradient trajectories with gradient estimates provided by the Reinforce algorithm achieve an O(1/√n) distance-squared convergence rate if the method's step-size is chosen appropriately. Subsequently, specializing to the class of deterministic Nash policies, we show that this rate can be improved dramatically and, in fact, policy gradient methods converge within a finite number of iterations in that case.
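For concreteness, here is a minimal sketch of the kind of update the abstract refers to: a tabular softmax policy whose gradient is estimated from sampled play via the Reinforce (score-function) estimator. The single-state two-action payoff table and the 1/√n step-size schedule are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()                  # numerical stabilization
    e = np.exp(z)
    return e / e.sum()

# Illustrative single-state game: payoff[a] is the reward for action a.
payoff = np.array([1.0, 0.0])

theta = np.zeros(2)                  # softmax policy parameters
for n in range(1, 10_001):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)          # sample an action from the policy
    r = payoff[a]
    # Reinforce / score-function estimate: grad log pi(a) = e_a - pi
    grad_log = -pi
    grad_log[a] += 1.0
    theta += (1.0 / np.sqrt(n)) * r * grad_log   # step-size ~ 1/sqrt(n)

print(softmax(theta))   # the policy concentrates on the higher-payoff action
```

The stochastic-game analysis additionally has to handle the state dynamics and the other players' simultaneous updates, which is where the SOS condition enters; the sketch above only shows the estimator and step-size structure.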
In this paper, we introduce a class of learning dynamics for general quantum games, which we call "follow the quantum regularized leader" (FTQL) in reference to the classical FTRL template for finite games. We show that the induced quantum state dynamics decompose into (i) a classical, commutative component which governs the dynamics of the system's eigenvalues in a way analogous to the evolution of mixed strategies under FTRL; and (ii) a non-commutative component for the system's eigenvectors which has no classical counterpart. Despite the complications that this non-classical component entails, we find that the FTQL dynamics incur no more than constant regret in all quantum games. Moreover, adjusting classical notions of stability to account for the nonlinear geometry of the state space of quantum games, we show that only pure quantum equilibria can be stable and attracting under FTQL while, as a partial converse, pure equilibria that satisfy a certain "variational stability" condition are always attracting. Finally, we show that the FTQL dynamics are Poincaré recurrent in quantum min-max games, extending in this way a very recent result for the quantum replicator dynamics.
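To make the regularized-leader step concrete, the sketch below implements one plausible instance of such a choice map: the matrix analogue of the classical logit/softmax map, which sends an accumulated Hermitian score matrix Y to the density matrix exp(Y)/tr exp(Y) (the map induced by the von Neumann entropy regularizer). The payoff operator, step weights, and dimension are hypothetical; the paper's actual dynamics are continuous-time and cover general regularizers.

```python
import numpy as np

def quantum_logit(Y):
    """Choice map induced by the von Neumann entropy: maps a Hermitian
    score matrix Y to the density matrix exp(Y) / tr(exp(Y))."""
    w, V = np.linalg.eigh(Y)             # spectral decomposition of Y
    e = np.exp(w - w.max())              # stabilized exponential of eigenvalues
    X = (V * e) @ V.conj().T             # V diag(e) V^dagger
    return X / e.sum()                   # normalize to unit trace

rng = np.random.default_rng(0)
d = 2
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (A + A.conj().T) / 2                 # hypothetical Hermitian payoff operator

Y = np.zeros((d, d), dtype=complex)      # accumulated payoff "scores"
for t in range(1, 101):
    X = quantum_logit(Y)                 # current state: PSD, trace 1
    Y += A / t                           # leader update with vanishing weights

print(np.linalg.eigvalsh(quantum_logit(Y)))  # eigenvalues form a prob. vector
```

The eigenvalues of the output state evolve like classical mixed strategies under exponential weights, while the eigenvectors rotate with Y; this is the commutative/non-commutative split the abstract describes.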