2020
DOI: 10.48550/arxiv.2002.04025
Preprint
Can Graph Neural Networks Count Substructures?

Zhengdao Chen,
Lei Chen,
Soledad Villar
et al.

Abstract: The ability to detect and count certain substructures in graphs is important for solving many tasks on graph-structured data, especially in the contexts of computational chemistry and biology as well as social network analysis. Inspired by this, we propose to study the expressive power of graph neural networks (GNNs) via their ability to count attributed graph substructures, extending recent works that examine their power in graph isomorphism testing and function approximation. We distinguish between two types…

Cited by 23 publications (52 citation statements)
References 41 publications
“…Recently, there have been several research attempts [39]–[41]. These methods focus on counting isomorphic subgraphs or producing approximate matching results, which differ from the target of this paper, i.e., generating matching orders to find exact subgraph mappings.…”
Section: Related Work
confidence: 99%
“…They make predictions by learning graph representations via the message-passing mechanism, where the representation h_u of each node u ∈ V is updated by aggregating messages from its neighbors, denoted as N(u). The message passing is iteratively applied to all nodes in V, so each node can collect messages from its multi-hop neighbors and produce structure-aware representations (Chen et al., 2020). Given h…”
Section: Graph Neural Network
confidence: 99%
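The message-passing scheme described in the statement above can be sketched as follows. This is a minimal illustration, not the cited paper's method: the function names (`message_passing_round`, `run`) and the fixed 0.5/0.5 combination of self and neighbor information are hypothetical choices made for the example.

```python
# Minimal sketch of iterative message passing on a graph.
# h:         dict mapping node -> representation (a scalar here, for simplicity)
# neighbors: dict mapping node u -> list of nodes in N(u)

def message_passing_round(h, neighbors):
    """One synchronous update: each node u aggregates the current
    representations of its neighbors N(u) and combines them with h[u]."""
    new_h = {}
    for u in h:
        # Sum is one common permutation-invariant aggregator
        # (mean and max are other typical choices).
        msg = sum(h[v] for v in neighbors[u])
        # Hypothetical update rule: equal-weight mix of the node's own
        # state and the aggregated neighbor message.
        new_h[u] = 0.5 * h[u] + 0.5 * msg
    return new_h

def run(h, neighbors, T):
    """Apply T rounds, so each node collects information from its
    T-hop neighborhood, yielding structure-aware representations."""
    for _ in range(T):
        h = message_passing_round(h, neighbors)
    return h
```

On a path graph 0–1–2 with a nonzero feature only at node 0, node 2 stays at zero after one round but becomes nonzero after two, showing how multi-hop information spreads round by round.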
“…Then, they are aggregated together with v({0}) following Equation 7 to form v*_τ({0}). For graph data, the surplus allocation approach has two advantages over the all-possible-coalition aggregation used in the Shapley value: (1) the aggregated payoff in each v*_τ is structure-aware, like the representations learned by GNNs (Chen et al., 2020), and…”
Section: Connecting GNNs and HN Surplus Allocation Through the Messag…
confidence: 99%
“…For example, [34,49] show that the capability of MPGNNs to distinguish different graphs is upper-bounded by the 1-Weisfeiler-Lehman isomorphism test (1-WL test), which is known to be unable to distinguish many common substructures (examples in [39]). In addition, [2,9] show that MPGNNs are unable to count or detect substructures with 3 or more nodes. Both theoretical results uncover crucial limitations of MPGNNs, as graph substructures are widely recognized as indicative in various complex networks.…”
Section: Introduction
confidence: 99%