Graph Neural Networks (GNNs) have achieved immense success on node classification, owing to their power to exploit the topological structure of graph data across many domains, including social media, e-commerce, and FinTech. However, recent studies show that GNNs are vulnerable to attacks aimed at degrading their performance, e.g., on the node classification task. Existing studies of adversarial attacks on GNNs focus primarily on manipulating the connectivity between existing nodes, which requires greater effort on the part of the attacker in real-world applications. In contrast, it is much more expedient for an attacker to inject adversarial nodes, e.g., fake profiles with forged links, into an existing graph so as to degrade the GNN's performance in classifying existing nodes. Hence, we consider a novel form of node injection poisoning attack on graph data. We model the key steps of a node injection attack, e.g., establishing links between the injected adversarial nodes and other nodes and choosing the labels of the injected nodes, as a Markov decision process. We propose a novel reinforcement learning method for Node Injection Poisoning Attacks (NIPA) that sequentially modifies the labels and links of the injected nodes without changing the connectivity between existing nodes. Specifically, we introduce a hierarchical Q-learning network to manipulate the labels of the adversarial nodes and their links with other nodes in the graph, and design an appropriate reward function to guide the reinforcement learning agent to reduce the node classification performance of the GNN. Our experiments show that NIPA is consistently more effective than baseline node injection attack methods at poisoning the graph data used to train GNNs on several benchmark datasets. We further show that graphs poisoned by NIPA are statistically similar to the original (clean) graphs, enabling the attacks to evade detection.
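To make the attack formulation concrete, here is a minimal NumPy sketch of one episode of the node injection MDP described in the abstract. It is an illustration under assumptions, not the authors' code: the `evaluate_gnn_accuracy` oracle is a hypothetical stand-in for retraining and evaluating the victim GNN, and the random per-step choices stand in for the trained hierarchical Q-network.

```python
# Minimal sketch (assumptions, not the authors' implementation) of the
# node injection poisoning MDP: inject nodes, then sequentially pick
# (injected node, link endpoint, fake label); reward = accuracy drop.
import numpy as np

def evaluate_gnn_accuracy(adj, labels):
    """Hypothetical stand-in for retraining and evaluating the target GNN;
    a real attack would plug in the victim model's training pipeline here."""
    return max(0.5, 0.9 - 0.01 * adj.sum() / adj.shape[0])  # placeholder only

def inject_attack_episode(adj, labels, n_inject, budget, n_classes, rng):
    """One MDP episode. Each step's hierarchical action mirrors the abstract:
    (1) pick an injected node, (2) pick an existing node to link it to,
    (3) pick the fake node's label. A trained hierarchical Q-network would
    replace the random choices below."""
    n = adj.shape[0]
    # Grow adjacency/labels to hold the injected (initially isolated) nodes.
    big = np.zeros((n + n_inject, n + n_inject), dtype=adj.dtype)
    big[:n, :n] = adj
    labels = np.concatenate([labels, rng.integers(0, n_classes, n_inject)])
    base_acc = evaluate_gnn_accuracy(big, labels)
    for _ in range(budget):
        fake = n + rng.integers(n_inject)        # level 1: which injected node
        target = rng.integers(n)                 # level 2: which existing node
        big[fake, target] = big[target, fake] = 1
        labels[fake] = rng.integers(n_classes)   # level 3: the fake node's label
    reward = base_acc - evaluate_gnn_accuracy(big, labels)  # accuracy drop
    return big, labels, reward
```

In the full method, the per-episode reward would be fed back to train the Q-network, e.g., `inject_attack_episode(adj, y, n_inject=10, budget=40, n_classes=7, rng=np.random.default_rng())`.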
The complex morphologies exhibited by spatially confined thin objects have long challenged human efforts to understand and manipulate them, from the representation of patterns in draped fabric in Renaissance art to current-day efforts to engineer flexible sensors that conform to the human body. We introduce a theoretical principle that broadly generalizes Euler's elastica, a core concept of continuum mechanics that invokes the energetic preference of bending over straining a thin solid object and has been widely applied to classical and modern studies of beams and rods. We define a class of geometrically incompatible confinement problems, whereby the topography imposed on a thin solid body is incompatible with its intrinsic ("target") metric and, as a consequence of Gauss' Theorema Egregium, induces strain. Focusing on a prototypical example of a sheet attached to a spherical substrate, numerical simulations and analytical study demonstrate that the mechanics is governed by a principle we call the "Gauss-Euler elastica". This emergent rule states that, despite the unavoidable strain in such incompatible confinement, the ratio between the energies stored in straining and bending the solid may be arbitrarily small. The Gauss-Euler elastica underlies a theoretical framework that greatly simplifies the daunting task of solving the highly nonlinear equations that describe thin solids at mechanical equilibrium. This development thus opens new possibilities for attacking a broad class of phenomena governed by the coupling of geometry and mechanics.
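The central claim can be restated schematically as follows; the symbols and the asymptotic regime are notational assumptions for illustration, not taken from the paper.

```latex
% Schematic restatement (notation assumed, not from the paper):
% U_s = strain energy, U_b = bending energy of the confined sheet.
% Incompatibility (Theorema Egregium) forces some straining, yet:
\[
  U_{\mathrm{s}} > 0 ,
  \qquad
  \frac{U_{\mathrm{s}}}{U_{\mathrm{b}}} \,\to\, 0
  \quad \text{in a suitable asymptotic regime (e.g., vanishing thickness).}
\]
```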
Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been criticized. Prior studies show that unnoticeable modifications of graph topology or nodal features can significantly reduce the performance of GNNs. Designing graph neural networks that are robust against poisoning attacks is very challenging, and several efforts have been made. Existing work aims to reduce the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal because it fails to discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge with which to train the ability to detect adversarial edges, thereby elevating the robustness of GNNs. However, this potential of clean graphs is neglected by existing work. To this end, we investigate a novel problem of improving the robustness of GNNs against poisoning attacks by exploring clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
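A minimal PyTorch sketch of the penalized aggregation idea follows, assuming GAT-style edge attention; the function names, the hinge form of the penalty, and the hyperparameters `eta` and `margin` are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumptions, not the PA-GNN release): attention scores on
# edges known to be adversarial (from perturbed clean graphs) are pushed at
# least `margin` below the mean clean-edge score via a hinge penalty.
import torch
import torch.nn.functional as F

def edge_attention(h, edges, a):
    """GAT-style unnormalized attention score per (src, dst) edge.
    h: (N, d) node features, a: (2d, 1) attention parameter vector."""
    src, dst = edges
    return F.leaky_relu((torch.cat([h[src], h[dst]], dim=-1) @ a).squeeze(-1))

def penalized_loss(logits, y, att_clean, att_adv, eta=0.1, margin=1.0):
    """Cross-entropy plus a hinge that keeps adversarial-edge attention
    below the mean clean-edge attention by at least `margin`."""
    ce = F.cross_entropy(logits, y)
    penalty = F.relu(margin - (att_clean.mean() - att_adv)).mean()
    return ce + eta * penalty
```

In the full method this loss would sit inside the meta-optimization loop, trained on perturbed clean graphs and then transferred to the poisoned target graph.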
…significance for diameters smaller than ∼ 1 nm. The associated reduction of their collapse pressure is attributed to the discretization of the elastic compliances around the circumference of the tubes.
Graph Convolutional Networks (GCNs) show promising results for semi-supervised learning tasks on graphs and have thus become favorable compared with other approaches. Despite the remarkable success of GCNs, it is difficult to train them with insufficient supervision. When labeled data are limited, the performance of GCNs becomes unsatisfactory for low-degree nodes. While some prior work analyzes the successes and failures of GCNs at the whole-model level, profiling GCNs at the individual-node level remains underexplored. In this paper, we analyze GCNs with respect to the node degree distribution. From empirical observation to theoretical proof, we confirm that GCNs are biased towards nodes with larger degrees, achieving higher accuracy on them, even though high-degree nodes are underrepresented in most graphs. We further develop a novel Self-Supervised-Learning Degree-Specific GCN (SL-DSGCN) that mitigates the degree-related biases of GCNs from both the model and data perspectives. First, we propose a degree-specific GCN layer that captures both the discrepancies and the similarities of nodes with different degrees, reducing the model-level bias of GCNs caused by sharing the same parameters across all nodes. Second, we design a self-supervised-learning algorithm that creates pseudo labels with uncertainty scores on unlabeled nodes using a Bayesian neural network. Pseudo labels increase the chance that low-degree nodes connect to labeled neighbors, thus reducing the data-level bias of GCNs. The uncertainty scores are further exploited to dynamically weight pseudo labels during stochastic gradient descent for SL-DSGCN. Experiments on three benchmark datasets show that SL-DSGCN not only outperforms state-of-the-art self-training/self-supervised-learning GCN methods, but also dramatically improves GCN accuracy for low-degree nodes.
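A minimal PyTorch sketch of the uncertainty-weighted pseudo-label objective follows, assuming a stochastic model (e.g., MC dropout) as the Bayesian neural network; the weighting formula and all names are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch (assumptions, not the SL-DSGCN release): pseudo labels on
# unlabeled nodes come from Monte-Carlo samples of a stochastic "Bayesian"
# model; each pseudo label is down-weighted by its predictive uncertainty.
import torch
import torch.nn.functional as F

def pseudo_labels_with_uncertainty(bayes_model, x, n_samples=20):
    """Assumes bayes_model(x) is stochastic (e.g., dropout left active).
    Returns hard pseudo labels and a per-node confidence weight in [0, 1]."""
    probs = torch.stack(
        [F.softmax(bayes_model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(0)
    conf, labels = mean.max(dim=-1)  # high mean prob => confident pseudo label
    weight = conf * (1.0 - probs.var(0).sum(-1).clamp(max=1.0))
    return labels, weight

def weighted_pseudo_loss(logits, pseudo_y, weight):
    """Per-node cross-entropy scaled by the uncertainty-derived weight,
    so uncertain pseudo labels contribute less to each SGD step."""
    return (weight * F.cross_entropy(logits, pseudo_y, reduction="none")).mean()
```

The weights act as dynamic per-sample learning-rate multipliers, which matches the abstract's description of exploiting uncertainty scores during stochastic gradient descent.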