2022
DOI: 10.48550/arxiv.2205.13492
Preprint

Sparse Graph Learning for Spatiotemporal Time Series

Abstract: Outstanding achievements of graph neural networks for spatiotemporal time series prediction show that relational constraints introduce a positive inductive bias into neural forecasting architectures. Often, however, the relational information characterizing the underlying data-generating process is unavailable; the practitioner is then left with the problem of inferring from data which relational graph to use in the subsequent processing stages. We propose novel, principled, yet practical, probabilistic methods …

Cited by 1 publication (1 citation statement)
References 25 publications
“…Models of this type are fully inductive, in the sense that they can be used to make predictions for networks and time windows different from those they have been trained on, provided a certain level of similarity (e.g., homogeneous sensors) between source and target node sets [15]. Among the different implementations of this general framework, we can distinguish between time-then-space (TTS) and time-and-space (T&S) models by following the terminology of previous works [16,17]. Specifically, in TTS models the sequence of representations h_{i,≤t} is encoded by a sequence model, e.g., an RNN, before propagating information along the spatial dimension through message passing (MP) [16].…”
Section: Forecasting with STGNNs
Citation type: mentioning (confidence: 99%)
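The cited statement describes the time-then-space (TTS) scheme only in words: first encode each node's history with a sequence model, then propagate the resulting representations over the graph via message passing. The sketch below is a minimal illustration of that ordering under stated assumptions (PyTorch, a single GRU layer, one message-passing step over a given adjacency matrix); the class and parameter names (TTSModel, hidden_size, msg, upd) are chosen here for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class TTSModel(nn.Module):
    """Time-then-space sketch: per-node RNN encoding followed by one
    round of message passing over a given adjacency matrix."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.msg = nn.Linear(hidden_size, hidden_size)       # message function
        self.upd = nn.Linear(2 * hidden_size, hidden_size)   # update function
        self.readout = nn.Linear(hidden_size, input_size)    # one-step-ahead head

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, time_steps, input_size), adj: (num_nodes, num_nodes)
        # "time": encode each node's sequence independently (the h_{i,≤t} above)
        _, h = self.rnn(x)               # h: (1, num_nodes, hidden_size)
        h = h.squeeze(0)                 # (num_nodes, hidden_size)
        # "space": propagate the final representations along the graph
        messages = adj @ self.msg(h)     # aggregate transformed neighbor states
        h = torch.relu(self.upd(torch.cat([h, messages], dim=-1)))
        return self.readout(h)           # prediction for the next time step


# Example usage with random data: 10 nodes, 12 past steps, 1 channel.
x = torch.randn(10, 12, 1)
adj = (torch.rand(10, 10) > 0.7).float()  # stand-in for a learned/sparse graph
y_hat = TTSModel(input_size=1, hidden_size=32)(x, adj)  # shape: (10, 1)
```

In a T&S model, by contrast, the temporal and spatial operators would be interleaved at every time step rather than applied in two separate stages as above.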