2021
DOI: 10.48550/arxiv.2103.01043
Preprint

Persistent Message Passing

Heiko Strathmann,
Mohammadamin Barekatain,
Charles Blundell
et al.

Abstract: Graph neural networks (GNNs) are a powerful inductive bias for modelling algorithmic reasoning procedures and data structures. Their prowess was mainly demonstrated on tasks featuring Markovian dynamics, where querying any associated data structure depends only on its latest state. For many tasks of interest, however, it may be highly beneficial to support efficient data structure queries dependent on previous states. This requires tracking the data structure's evolution through time, placing significant press…
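To make the abstract's notion of "queries dependent on previous states" concrete, here is a minimal Python sketch of a persistent (versioned) map: every update returns a new version, and all earlier versions remain queryable. This is only a conceptual illustration of persistence; the class and method names are assumptions and do not reflect the paper's model or API.

```python
# Minimal sketch (not the paper's architecture): a persistent map built by
# copy-on-write. Each update yields a new version; any past version can
# still answer queries, unlike a purely Markovian structure whose latest
# state is the only one available.

class PersistentMap:
    def __init__(self, data=None):
        self._data = data or {}          # immutable snapshot for this version

    def set(self, key, value):
        new_data = dict(self._data)      # copy so the old version is untouched
        new_data[key] = value
        return PersistentMap(new_data)   # hand back the next version

    def get(self, key):
        return self._data.get(key)


# Usage: keep a handle to every version, then query an old one.
v0 = PersistentMap()
v1 = v0.set("x", 1)
v2 = v1.set("x", 2)
assert v1.get("x") == 1   # a previous state still answers queries
assert v2.get("x") == 2
```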

Cited by 2 publications (2 citation statements)
References 13 publications
“…However, it quickly became apparent that it is not enough to just train any GNN; for many algorithmic tasks, careful attention is required. Several papers illustrated special cases of GNNs that align with sequential algorithms (Veličković et al., 2019), linearithmic sequence processing (Freivalds et al., 2019), physics simulations (Sanchez-Gonzalez et al., 2020), iterative algorithms (Tang et al., 2020), data structures (Veličković et al., 2020) or auxiliary memory (Strathmann et al., 2021). Some explanations for this lack of easy generalisation have arisen: we now have both geometric (Xu et al., 2020) and causal (Bevilacqua et al., 2021) views into how better generalisation can be achieved.…”
Section: Introduction
confidence: 99%
“…Lastly, there are representational issues associated with dynamically allocated memory: it may be unclear what the best way is to represent the internal memory storage and its usage in algorithm trajectories. One example of this ambiguity is in asking whether the algorithm executor should start with a "scratch space" defined by the space complexity of the problem that gets filled up, or dynamically generate such space (Strathmann et al., 2021). As such, we for now exclude all algorithms that require allocating memory which cannot be directly attached to the set of objects provided at input time.…”
Section: CLRS Algorithmic Reasoning Benchmark
confidence: 99%
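The citation statement above contrasts two ways an executor's working memory could be represented: a scratch buffer sized by the problem's space bound that gets filled up, versus memory that is allocated only as the trajectory demands it. The sketch below illustrates that contrast; the function names are hypothetical and are not taken from the cited benchmark.

```python
# Illustrative only: two representations of an executor's working memory.

def run_with_scratch_space(n_inputs, space_bound):
    memory = [None] * space_bound        # fixed scratch space, sized up front
    write_ptr = 0
    for step in range(n_inputs):
        memory[write_ptr] = step          # stand-in for an intermediate value
        write_ptr += 1
    return memory

def run_with_dynamic_memory(n_inputs):
    memory = []                           # grows only as the algorithm demands
    for step in range(n_inputs):
        memory.append(step)
    return memory

print(run_with_scratch_space(3, space_bound=8))  # [0, 1, 2, None, None, None, None, None]
print(run_with_dynamic_memory(3))                # [0, 1, 2]
```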