A popular formalism for multiagent control applies tools from game theory, casting a multiagent decision problem as a noncooperative game in which individual agents make local choices to optimize their own local utility functions in response to the observable choices made by other agents. When the system-level objective is submodular maximization, it is known that if every agent can observe the action choices of all other agents, then all Nash equilibria of a large class of resulting games are within a factor of 2 of optimal; that is, the price of anarchy is 1/2. However, little is known about settings in which agents cannot observe the action choices of other relevant agents. To study this, we extend the standard game-theoretic model to one in which a subset of agents is compromised: each compromised agent is either blind (unable to observe others' choices) or isolated (blind, and also invisible to other agents). We prove exact expressions for the price of anarchy as a function of the number of compromised agents. When k agents are compromised (in any combination of blind or isolated), we show that the price of anarchy for a large class of utility functions is exactly 1/(2 + k). We then show that if agents use marginal-cost utility functions and at least one of the compromised agents is blind (rather than isolated), the price of anarchy improves to 1/(1 + k). We also provide simulation results demonstrating the effects of these observation denials in a dynamic setting.
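As an illustrative sketch (not the paper's model), the fully-observable baseline can be made concrete with a toy submodular coverage game: each agent picks one resource, the system objective is the number of elements covered, and each agent receives its marginal contribution as utility. Brute-force search over joint actions finds the pure Nash equilibria and checks the worst equilibrium against the optimum. The specific resources and instance below are hypothetical, chosen only to make the example small.

```python
from itertools import product

# Hypothetical toy instance: each agent chooses one "resource" (a set of
# elements to cover); the system objective is coverage, which is submodular.
resources = {
    "a": {1, 2},
    "b": {2, 3},
    "c": {4},
}
num_agents = 2  # both agents pick from the same resource menu

def welfare(profile):
    """System objective: number of distinct elements covered."""
    covered = set()
    for choice in profile:
        covered |= resources[choice]
    return len(covered)

def marginal_utility(profile, i):
    """Agent i's marginal-contribution utility: f(all) - f(all but i)."""
    others = profile[:i] + profile[i + 1:]
    return welfare(profile) - welfare(others)

def is_nash(profile):
    """True if no agent has a strictly improving unilateral deviation."""
    for i in range(num_agents):
        current = marginal_utility(profile, i)
        for alt in resources:
            trial = profile[:i] + (alt,) + profile[i + 1:]
            if marginal_utility(trial, i) > current:
                return False
    return True

profiles = list(product(resources, repeat=num_agents))
opt = max(welfare(p) for p in profiles)
equilibria = [p for p in profiles if is_nash(p)]
worst_eq = min(welfare(p) for p in equilibria)
# The price-of-anarchy guarantee says this ratio is at least 1/2 when
# every agent observes all others (no blind or isolated agents).
print(worst_eq / opt)
```

In this small instance every equilibrium happens to be optimal; the 1/2 bound is a worst-case guarantee over all such games, and the abstract's results describe how it degrades to 1/(2 + k) when k agents lose observability.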
We study settings in which autonomous agents are designed to optimize a given system-level objective. In typical approaches to this problem, each agent is endowed with a decision-making rule that specifies the agent's choice as a function of relevant information pertaining to the system's state. The choices of other agents in the system comprise a key component of this information. This paper considers a scenario in which the designed decision-making rules are not implementable in the realized system due to discrepancies between the anticipated and realized information available to the agents. The focus of this paper is to develop methods by which the agents can preserve system-level performance guarantees in these unanticipated scenarios through local and independent redesigns of their own decision-making rules. First, we show a general impossibility result: in general settings, no local redesign methodology can offer any preservation of system-level performance guarantees, even when the affected agents satisfy an inconsequentiality criterion. However, we then show that when system-level objectives are submodular, local redesigns of utility functions do exist which allow nominal performance guarantees to degrade gracefully as information is denied to agents. That is, in these submodular settings, agents can adapt to informational inconsistencies independently without incurring much loss in terms of system-level performance.