Influence blocking games have been used to model adversarial domains with a social component, such as counterinsurgency. In these games, a mitigator attempts to minimize the efforts of an influencer to spread its agenda across a social network. Previous work has assumed that the influence graph structure is known with certainty by both players. However, in reality, there is often significant information asymmetry between the mitigator and the influencer. We introduce a model of this information asymmetry as a two-player zero-sum Bayesian game. Nearly all past work in influence maximization and social network analysis suggests that graph structure is fundamental to strategy generation, leading to an expectation that solving the Bayesian game exactly is crucial. Surprisingly, we show through extensive experimentation on synthetic and real-world social networks that many common forms of uncertainty can be addressed near-optimally by ignoring the vast majority of it and simply solving an abstracted game with a few randomly chosen types. This suggests that optimal strategies of games that do not model the full range of uncertainty in influence blocking games are typically robust to uncertainty about the influence graph structure.

I. INTRODUCTION

Social contagion has long been of great interest in the literature on marketing, the spread of rumors, and, recently, in the context of the Arab Spring [1][2][3]. Our specific focus is on counterinsurgency, which we view as a competition for the support of local leaders. Counterinsurgency can be modeled as a game with two strategic players, the insurgents and the peacekeepers, in which the insurgents aim to spread their views, unrest, etc. among the local population, while the peacekeepers wish to minimize the resulting contagion by engaging in their own influence campaign [4][5][6].
The key computational question we address is: given limited resources, how should we select which of the local leaders to influence in order to minimize the global impact of the insurgency? These 'influence blocking' games have received recent attention in the security games literature [6], where they have been modeled using graphs with nodes representing the tribal leaders and edges representing possible transmission of influence. However, this line of work has assumed that full information about network structure is available to both players. In practice, informational challenges abound in counterinsurgency, where the insurgents are typically an indigenous group that has an informational advantage, and the mitigators are often uncertain about the social network [4]. We model counterinsurgency as an influence blocking game with asymmetric information. Specifically, we assume that the influencer (an insurgent group) has perfect knowledge of the influence graph structure, while the mitigator is uncertain about it. In the resulting Bayesian game, an influencer type is a particular instantiation of the influence graph, and the mitigator must reason over the distribution over these graphs (i.e., influencer types...
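The abstraction idea above can be made concrete with a small sketch: sample a handful of random influence graphs (influencer types), and have the mitigator choose the seed node that minimizes the influencer's best response averaged over those sampled types. This is not the paper's algorithm; all function names, the competitive-cascade spread model, and the parameter values are illustrative assumptions.

```python
import random

def random_graph(n, p, rng):
    # Erdős–Rényi-style undirected influence graph as adjacency lists;
    # each sampled graph stands in for one influencer "type".
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def spread(adj, inf_seed, mit_seed, rng, prob=0.5):
    # Toy competitive cascade: both seeds spread simultaneously and a node
    # adopts whichever influence reaches it first. Returns the number of
    # nodes the influencer captures (the zero-sum payoff).
    owner = {inf_seed: "inf", mit_seed: "mit"}
    frontier = [inf_seed, mit_seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in owner and rng.random() < prob:
                    owner[v] = owner[u]
                    nxt.append(v)
        frontier = nxt
    return sum(1 for o in owner.values() if o == "inf")

def mitigator_strategy(types, n, rng, sims=50):
    # Solve the abstracted game: pick the mitigator seed minimizing the
    # influencer's best response, averaged over the few sampled types.
    # The influencer knows the true graph, so it best-responds per type.
    best_node, best_val = None, float("inf")
    for m in range(n):
        val = 0.0
        for adj in types:
            val += max(
                sum(spread(adj, i, m, rng) for _ in range(sims)) / sims
                for i in range(n) if i != m
            )
        val /= len(types)
        if val < best_val:
            best_node, best_val = m, val
    return best_node, best_val

rng = random.Random(0)
n = 12
types = [random_graph(n, 0.25, rng) for _ in range(3)]  # a few random types
node, val = mitigator_strategy(types, n, rng)
```

The point of the sketch is the structure of the abstraction: the mitigator never enumerates the full type distribution, only a few sampled graphs, mirroring the paper's finding that a handful of randomly chosen types suffices.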
Leveraging machine-learning (ML) techniques for compiler optimizations has been widely studied and explored in academia. However, the adoption of ML in general-purpose, industry-strength compilers has yet to happen. We propose MLGO, a framework for integrating ML techniques systematically in an industrial compiler, LLVM. As a case study, we present the details and results of replacing the heuristics-based inlining-for-size optimization in LLVM with machine-learned models. To the best of our knowledge, this work is the first full integration of ML in a complex compiler pass in a real-world setting. It is available in the main LLVM repository. We use two different ML algorithms, Policy Gradient and Evolution Strategies, to train the inlining-for-size model, and achieve up to 7% size reduction when compared to the state-of-the-art LLVM -Oz. The same model, trained on one corpus, generalizes well to a diverse set of real-world targets, as well as to the same set of targets after months of active development. This generalization property makes the trained models practical to deploy in real-world settings.