Graph Neural Networks (GNNs) are neural networks designed to process graph-structured data. Recent work on GNNs has focused heavily on the theoretical properties of these models, in particular on their mathematical expressiveness: their ability to map different graphs or nodes to different outputs, or, conversely, to map permutations of the same graph to the same output. In this paper, we review the mathematical expressiveness results for graph neural networks. We find that, according to their mathematical properties, GNNs that are more expressive than standard graph neural networks fall into two categories: models that achieve the highest level of expressiveness but require intensive computation, and models that improve on the expressiveness of standard graph neural networks through node identification and substructure awareness. Additionally, we compare existing architectures in terms of their expressiveness. We conclude by discussing future lines of work on the theoretical expressiveness of graph neural networks.
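The notion of expressiveness described above, distinguishing non-isomorphic graphs while respecting permutations, is conventionally measured against the Weisfeiler-Leman (1-WL) colour-refinement test, which standard message-passing GNNs cannot exceed. As an illustrative sketch (not code from the paper; the helper name `wl_refine` is ours), the following shows 1-WL failing to separate two non-isomorphic graphs:

```python
from collections import Counter

def wl_refine(adj, rounds=3):
    """1-WL colour refinement: repeatedly re-colour each node by
    hashing its own colour with the sorted multiset of its
    neighbours' colours, then compare colour histograms."""
    colours = {v: 0 for v in adj}
    for _ in range(rounds):
        colours = {
            v: hash((colours[v], tuple(sorted(colours[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colours.values())  # graph-level colour histogram

# Two non-isomorphic 2-regular graphs that 1-WL (and hence any
# standard message-passing GNN) cannot tell apart:
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
six_cycle = {0: [1, 5], 1: [0, 2], 2: [1, 3],
             3: [2, 4], 4: [3, 5], 5: [4, 0]}

print(wl_refine(two_triangles) == wl_refine(six_cycle))  # → True
```

Because every node in both graphs has degree 2, refinement stabilises with a single colour everywhere, so the histograms coincide; this is exactly the kind of limitation that the more expressive architectures surveyed in the paper are designed to overcome.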