Graph learning based collaborative filtering (GLCF), which is built upon the message passing mechanism of graph neural networks (GNNs), has received considerable recent attention and exhibits superior performance in recommender systems. However, although prior work has shown that GNNs can be easily compromised by adversarial attacks, little attention has been paid to the vulnerability of GLCF; questions such as whether GLCF models can be fooled as easily as GNNs remain largely unexplored. In this article, we study the vulnerability of GLCF. Specifically, we first propose an adversarial attack against GLCF. Considering the unique challenges of attacking GLCF, we adopt a greedy strategy to search for locally optimal perturbations, and design an attack utility function that handles the non-differentiable ranking-oriented metrics. Next, we propose a defense to robustify GLCF. The defense is based on the observation that attacks typically introduce suspicious interactions into the graph in order to manipulate the message passing process. We therefore measure a suspiciousness score for each interaction and reduce the message weight of suspicious interactions accordingly. We also provide a theoretical guarantee of the defense's robustness. Experimental results on three benchmark datasets demonstrate the effectiveness of both our attack and our defense.
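To make the attack idea concrete, below is a minimal sketch of a greedy perturbation search of the kind described above. The one-layer propagation scorer, the softmax-based surrogate utility, and all function names here are illustrative assumptions rather than the paper's exact design; the sketch only shows how a smooth surrogate can stand in for non-differentiable ranking metrics while interactions are injected greedily, one locally optimal flip at a time.

```python
import numpy as np

def scores(R, U, V):
    """Rank scores from one LightGCN-style propagation step: each user's
    own embedding is augmented with the embeddings of interacted items."""
    return (U + R @ V) @ V.T                             # (n_users, n_items)

def surrogate_utility(R, U, V, target_item, victims):
    """Smooth proxy for hit-ratio-style metrics: the target item's
    log-softmax score summed over the victim users."""
    s = scores(R, U, V)[victims]
    s = s - s.max(axis=1, keepdims=True)                 # numerical stability
    logp = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return logp[:, target_item].sum()

def greedy_attack(R, U, V, target_item, victims, budget=3):
    """Greedily inject, one at a time, the fake interaction whose flip
    yields the largest gain in the surrogate utility (a local optimum)."""
    R = R.copy()
    for _ in range(budget):
        base = surrogate_utility(R, U, V, target_item, victims)
        best, best_gain = None, -np.inf
        for u, i in zip(*np.where(R == 0)):              # candidate edge flips
            R[u, i] = 1.0
            gain = surrogate_utility(R, U, V, target_item, victims) - base
            R[u, i] = 0.0
            if gain > best_gain:
                best, best_gain = (u, i), gain
        R[best] = 1.0                                    # commit the best flip
    return R

rng = np.random.default_rng(1)
R = (rng.random((30, 40)) < 0.08).astype(float)          # toy interaction matrix
U, V = rng.normal(size=(30, 8)), rng.normal(size=(40, 8))
R_adv = greedy_attack(R, U, V, target_item=7, victims=np.arange(5))
print(int((R_adv != R).sum()), "interactions injected")  # -> 3
```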
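The defense can likewise be sketched as a reweighted message passing step: every observed interaction receives a suspiciousness score, and edges scoring above a threshold have their message weight attenuated before the degree-normalized propagation. The cosine-dissimilarity scoring rule and the threshold `tau` below are assumptions made for illustration, not the paper's formulation.

```python
import numpy as np

def suspicion_scores(R, U, V):
    """R: (n_users, n_items) binary interaction matrix; U, V: current
    user/item embeddings. Returns per-edge scores in [0, 1], where higher
    means more suspicious (the endpoints disagree in embedding space)."""
    sim = (U @ V.T) / (
        np.linalg.norm(U, axis=1, keepdims=True)
        * np.linalg.norm(V, axis=1, keepdims=True).T + 1e-8)
    return np.where(R > 0, (1.0 - sim) / 2.0, 0.0)       # cosine -> [0, 1]

def robust_propagate(R, U, V, tau=0.6):
    """One LightGCN-style propagation step in which suspicious edges
    (score above tau) have their message weight reduced."""
    S = suspicion_scores(R, U, V)
    W = np.where(R > 0, np.where(S > tau, 1.0 - S, 1.0), 0.0)
    # symmetric degree normalization on the reweighted bipartite graph
    du = W.sum(axis=1, keepdims=True) + 1e-8
    di = W.sum(axis=0, keepdims=True) + 1e-8
    W_norm = W / np.sqrt(du) / np.sqrt(di)
    return W_norm @ V, W_norm.T @ U                      # new user, item embeddings

rng = np.random.default_rng(0)
R = (rng.random((50, 80)) < 0.05).astype(float)
U, V = rng.normal(size=(50, 16)), rng.normal(size=(80, 16))
U_new, V_new = robust_propagate(R, U, V)
print(U_new.shape, V_new.shape)                          # (50, 16) (80, 16)
```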