2021 55th Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/ieeeconf53345.2021.9723298
Transferable Graph Neural Networks on Large-Scale Stochastic Graphs

Cited by 7 publications (5 citation statements)
References 12 publications
“…In future work, we will investigate more general classes of cellular sheaves that approximate unions of manifolds (perhaps representing multiple classes) or, more generally, stratified spaces [42], [43]. We believe our perspective on tangent bundle neural networks could shed further light on challenging problems in graph neural networks such as heterophily [29], over-squashing [44], or transferability [45]–[47]. Finally, we plan to tackle more sophisticated tasks, such as robot coordination, with our proposed architectures.…”
Section: Discussion (mentioning; confidence: 99%)
“…It is well known that GNN models have an unprecedented capability to generalize over graph-structured data [38], [39]. In the context of scaling to larger graphs, it is also known that GNNs retain good generalization capabilities as long as the spectral properties of the graphs are similar to those seen during training [40]. In our particular case, the internal message-passing architecture of RouteNet-F generalizes accurately to graphs with similar structures (e.g., a similar number of queues at output ports, or a similar number of flows aggregated in queues).…”
Section: Scaling To Larger Networks: Scale-Independent Features (mentioning; confidence: 99%)
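The claim quoted above, that a trained message-passing GNN transfers to larger graphs whose structure resembles the training graphs, can be made concrete with a minimal sketch. This is not RouteNet-F or any architecture from the cited papers; it is a generic one-layer graph convolution in NumPy, and every name in it (gcn_layer, the feature dimensions, the random test graphs) is an illustrative assumption. The only point it demonstrates is that the learned weight matrix W has a shape independent of the number of nodes, so the same trained weights can be evaluated on a graph of any size:

```python
# Minimal sketch (assumed example, not RouteNet-F): one graph-convolution
# layer with weights shared across graphs, showing that a GNN trained at
# one scale can at least be run at another.
import numpy as np

rng = np.random.default_rng(0)
F_IN, F_OUT = 8, 4
W = rng.normal(size=(F_IN, F_OUT))  # "trained" weights; size is node-count independent

def gcn_layer(A, X, W):
    """Symmetrically normalized graph convolution with a ReLU:
    H = relu(D^{-1/2} A D^{-1/2} X W). W does not depend on graph size."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

for n in (20, 200):                      # same weights, two graph sizes
    A = (rng.random((n, n)) < 0.2).astype(float)
    A = np.maximum(A, A.T)               # symmetrize
    np.fill_diagonal(A, 1.0)             # add self-loops
    X = rng.normal(size=(n, F_IN))
    print(n, gcn_layer(A, X, W).shape)   # -> (20, 4) then (200, 4)
```

Whether the outputs stay accurate on the larger graph is precisely the transferability question studied in the surrounding citations (e.g., via spectral similarity [40]); the sketch only shows that the cross-scale evaluation is well defined.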
“…AGGR{·} indicates a generic permutation-invariant aggregation function, while N(i) refers to the set of neighbors of node i, each associated with an edge of weight a_ji. Models of this type are fully inductive, in the sense that they can be used to make predictions for networks and time windows different from those they have been trained on, provided a certain level of similarity (e.g., homogeneous sensors) between the source and target node sets [15]. Among the different implementations of this general framework, we can distinguish between time-then-space (TTS) and time-and-space (T&S) models, following the terminology of previous works [16, 17].…”
Section: Forecasting With STGNNs (mentioning; confidence: 99%)
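The fragment above explains the symbols of a message-passing equation that the excerpt itself does not reproduce. A standard generic update consistent with those symbols is sketched below; AGGR, N(i), and a_ji come from the quote, while UPDATE, MSG, and the node features h_i^(ℓ) are illustrative placeholders rather than the cited paper's exact notation:

```latex
% Generic message-passing update; AGGR is permutation invariant,
% N(i) is the neighbor set of node i, and a_{ji} are edge weights.
% UPDATE, MSG, and h_i^{(\ell)} are assumed placeholder names.
\[
  h_i^{(\ell+1)} \;=\; \mathrm{UPDATE}\Bigl(
      h_i^{(\ell)},\;
      \operatorname{AGGR}\bigl\{\, a_{ji}\,
          \mathrm{MSG}\bigl(h_i^{(\ell)}, h_j^{(\ell)}\bigr)
          \;:\; j \in \mathcal{N}(i) \,\bigr\}
  \Bigr)
\]
```

In the quote's terminology, a time-then-space (TTS) model applies such a spatial step after encoding each node's time series, whereas a time-and-space (T&S) model interleaves the temporal and spatial processing.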