We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity. We prove that the graph queries computable by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations. This places GNNs in the circuit complexity class TC^0. Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic "sigmoid", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded-depth Boolean circuits with threshold gates, that is, exactly the queries in TC^0. Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations, and are therefore contained in uniform TC^0.

Theorem 1.1. Let Q be a query. Then the following are equivalent:
(1) Q is computable by a polynomial-size bounded-depth family of GNNs with random initialisation.
(2) Q is computable in TC^0.

We mention that, following [2], we allow GNNs with random initialisation to also use a feature known as global readout, which means that in each message-passing round of a GNN computation, the vertices not only receive messages from their neighbours, but also the aggregated state of all vertices. There is also a version of Theorem 1.1 for GNNs with global readout.
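To make the computation model concrete, the following is a minimal sketch of a single GNN message-passing round with global readout. It is not the construction used in the paper; the function and parameter names are illustrative, and sum aggregation with a ReLU activation is just one of the admissible choices (the logistic or hyperbolic tangent functions mentioned above would do as well).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_layer_with_global_readout(X, adj, W_self, W_nbr, W_read, b):
    """One message-passing round on a graph with 0/1 adjacency matrix `adj`
    (n x n) and vertex states X (n x d).

    Each vertex combines its own state, the sum of its neighbours' states,
    and the aggregated (summed) state of all vertices (global readout).
    All names and the sum/ReLU choices are illustrative assumptions.
    """
    nbr_msg = adj @ X                         # sum of neighbour states, per vertex
    readout = X.sum(axis=0, keepdims=True)    # aggregated state of all vertices
    readout = np.repeat(readout, X.shape[0], axis=0)
    return relu(X @ W_self + nbr_msg @ W_nbr + readout @ W_read + b)

# Toy usage: a path on 3 vertices with 2-dimensional vertex states.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
X = np.random.rand(3, 2)
W_self = np.random.rand(2, 2)
W_nbr = np.random.rand(2, 2)
W_read = np.random.rand(2, 2)
b = np.zeros(2)
print(gnn_layer_with_global_readout(X, adj, W_self, W_nbr, W_read, b))
```

A bounded-depth family in the sense above corresponds to stacking a constant number of such rounds, with the parameter matrices allowed to differ for each input size.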
Related Work

A fundamental result on the expressiveness of GNNs [24,32] states that two graphs are distinguishable by a GNN if and only if they are distinguishable by the 1-dimensional Weisfeiler-Leman (WL) algorithm, a simple combinatorial algorithm originally introduced as a graph isomorphism heuristic [23,31]. This result has had considerable impact on the subsequent development of GNNs, because it provides a yardstick for the expressiveness of GNN extensions (see [25]). Its generalisation to higher-order GNNs and higher-dimensional WL algorithms [24] even gives a hierarchy of increasingly expressive formalisms against which such extensions can be compared. However, these results relating GNNs and their extensions to the WL algorithm only consider a restricted form of expressiveness: the power to distinguish two graphs. Furthermore, the results are non-uniform, that is, the distinguishing GNNs depend on the input graphs, or at least on their size, and the GNNs may be arbitrarily large and deep. Indeed, the GNNs from the construction in [32] may be exponentially large in the graphs they distinguish; those of [24] are polynomial. Both constructions have recently been improved by [1], mainly by showing that the messages only need to contain logarithmically many bits.

We are not the first to study the logical expressiveness of GNNs (see [12] for a recent survey). It was proved in [3] that all unary queries definable in the guarded fragment GC of the extension C of first-order logic by counting quantifiers ∃≥n x ("there exist at least n vertices x satisfying some formula") are computable by a GNN. The ...