2021 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn52387.2021.9533319
Unified Spatio-Temporal Modeling for Traffic Forecasting using Graph Neural Network

Cited by 26 publications (10 citation statements)
References 11 publications
“…Among the deep learning models, GRU can only characterise the temporal trend of epidemics, and GCN can only depict spatial relations, whereas the other models can capture spatial and temporal variations simultaneously.
- SIR [23]: the susceptible-infectious-recovered compartmental model.
- SEIR [24]: the susceptible-exposed-infectious-recovered compartmental model.
- Spatial lag model (SLM) [33]: a linear regression model that includes a spatial lag term of the dependent variable and individual location fixed effects.
- Spatial error model (SEM) [33]: a linear regression model that includes a spatial error term and individual location fixed effects.
- GRU [30]: a gated recurrent neural network in which two gates capture long-distance dependencies.
- Graph convolutional networks (GCN) [34]: the basic GCN, in which a binary adjacency matrix captures spatial dependency.
- STGCN [15]: spatio-temporal graph convolutional network, the first to use convolution in both the temporal and spatial modules.
- Graph WaveNet (GWNet) [16]: captures spatial dependency by constructing an adaptive spatial adjacency matrix and models long-term temporal dependence with dilated causal convolution.
- Cola-GNN [6]: cross-location attention-based graph neural network that uses a dynamic location-aware attention mechanism to capture spatial dependency; a temporally dilated convolution module additionally captures both short- and long-range temporal dependencies across locations.
- STSGCN [27]: spatio-temporal synchronous graph convolutional networks, which design a spatio-temporal adjacency matrix over three consecutive time slices to capture localised spatial-temporal correlations.
- USTGCN [19]: a unified spatio-temporal graph convolution network that builds a global binary spatio-temporal matrix to capture all spatio-temporal adjacency relations simultaneously.
- Ada-STNet [20]: an adaptive spatio-temporal graph neural network that derives the optimal graph structure from node attributes at both macro and micro levels, using a convolutional architecture to capture spatial and temporal dependencies separately. …”
Section: Methods (mentioning)
confidence: 99%
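The basic GCN propagation that several of the quoted baselines build on is easy to state concretely. Below is a minimal NumPy sketch of the symmetrically normalised GCN layer (the formulation the "basic GCN [34]" entry refers to); the array names A, X, and W are illustrative placeholders, not identifiers from any of the cited papers.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W).

    A : (N, N) binary adjacency matrix of the location graph.
    X : (N, F_in) node feature matrix (e.g. one time step of traffic readings).
    W : (F_in, F_out) learnable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))  # D^{-1/2} as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# Toy usage: a 4-node path graph, 2 input features, 3 output features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = gcn_layer(A, rng.normal(size=(4, 2)), rng.normal(size=(2, 3)))
```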
“…USTGCN [19]: A unified spatio–temporal graph convolution network model that builds a global binary spatio–temporal matrix to capture all spatio–temporal adjacency relations simultaneously.…”
Section: Methods (mentioning)
confidence: 99%
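To make the "global binary spatio-temporal matrix" concrete: USTGCN stacks the N nodes of each of the T historical time steps into one graph with N*T vertices, so a single convolution pass mixes spatial and temporal neighbours at once. The sketch below is one plausible reading of that construction under stated assumptions (diagonal blocks hold within-time-step spatial edges with self-loops; lower-triangular blocks connect every earlier time step to the current one); the paper's exact block layout may differ.

```python
import numpy as np

def unified_st_adjacency(A, T):
    """Binary spatio-temporal adjacency over N*T stacked vertices.

    A : (N, N) binary spatial adjacency of the road network.
    T : number of historical time steps in the input window.
    Block (t, s) connects time step s to time step t for s <= t,
    so information flows only from the past/present, never the future.
    """
    N = A.shape[0]
    A_self = (A + np.eye(N) > 0).astype(float)  # spatial edges + self-loops
    A_st = np.zeros((N * T, N * T))
    for t in range(T):
        for s in range(t + 1):                  # causal: s <= t only
            A_st[t * N:(t + 1) * N, s * N:(s + 1) * N] = A_self
    return A_st
```

Applying one GCN-style layer with this (N*T) x (N*T) matrix propagates information along both axes in a single operation, which is what lets a unified model dispense with separate spatial and temporal modules.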
“…For traffic speed, we used the proposed model to predict 15, 30, and 60 minutes ahead. The compared baselines include the traditional HA and ARIMA models, along with neural network models such as STGCN [7], DCRNN [41], ASTGCN [52], GWN [42], LSGCN [43], and USTGCN [44].…”
Section: Baselines (mentioning)
confidence: 99%
“…Huang et al. [43] proposed a new graph attention network, cosAtt, obtaining spatial features through cosAtt and GCN and temporal features through a GLU. Roy et al. [44] consider important daily patterns and present-day patterns from traffic data, in addition to spatio-temporal characteristics, to improve prediction accuracy. However, these methods consider only spatial features based on structure-aware graph embeddings, without considering location information, so they cannot effectively capture spatial features.…”
Section: Introduction (mentioning)
confidence: 99%
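The GLU mentioned in this excerpt is a gated linear unit: one linear projection of the input is modulated elementwise by the sigmoid of a second projection, the standard gate used in STGCN-style temporal convolution blocks. A minimal sketch, with all shapes and parameter names chosen purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glu(X, W_lin, b_lin, W_gate, b_gate):
    """Gated linear unit: (X @ W_lin + b_lin) * sigmoid(X @ W_gate + b_gate).

    X              : (T, F_in) feature window for one node over T time steps.
    W_lin, W_gate  : (F_in, F_out) projection matrices.
    b_lin, b_gate  : (F_out,) biases.
    The sigmoid gate decides, per time step and feature, how much of
    the linear path passes through.
    """
    return (X @ W_lin + b_lin) * sigmoid(X @ W_gate + b_gate)
```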