2021
DOI: 10.1145/3451356

Graph Convolutional Network-based Model for Incident-related Congestion Prediction: A Case Study of Shanghai Expressways

Abstract: Traffic congestion has become a significant obstacle to the development of mega cities in China. Although local governments have used many resources in constructing road infrastructure, it is still insufficient for the increasing traffic demands. As a first step toward optimizing real-time traffic control, this study uses Shanghai Expressways as a case study to predict incident-related congestions. Our study proposes a graph convolutional network-based model to identify correlations in multi-dimensional sensor…



Cited by 4 publications (5 citation statements)
References 41 publications
“…It then combines the two using Kalman Filtering. More recently, Transformer blocks have been shown to provide accurate temporal models for forecasting, while additionally enabling parallelized computation for efficient training [Vaswani et al., 2017; Tuli et al., 2022]. A recent work, TSE-SC [Cai et al., 2020], sequentially infers the input using a GCN and then a Transformer encoder.…”
Section: Related Work
confidence: 99%
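The spatial step of the sequential GCN-then-Transformer pipeline described above can be sketched with one graph-convolution propagation layer (Kipf & Welling style). This is a minimal NumPy illustration, not the cited models' actual implementation; the toy road graph, feature sizes, and weights are hypothetical.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 3-node sensor graph (a line of road segments), 2 features per node.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 2))   # per-sensor features at one timestep
W = rng.normal(size=(2, 4))   # learnable layer weights
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 4): 4 spatial features per node
```

In the sequential design the citation describes, such spatially aggregated node features would then be fed, per timestep, into a Transformer encoder for temporal inference.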
“…For a graph G of a road network, at each timestep t, we encode the input window W (t) to perform both permutations of spatial and temporal inference as described in Section 1. For spatial inference, we use graph attention networks (GAT) [Veličković et al., 2017], and for temporal inference we use Transformer blocks [Vaswani et al., 2017]. Spatial-Inference.…”
Section: RadNet Model
confidence: 99%
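The GAT-based spatial inference mentioned above can be sketched as a single attention head computing, for each node, a softmax over its neighbors of a LeakyReLU-scored pairwise attention. This is a hedged NumPy sketch of the standard GAT formulation, not RadNet's actual code; the adjacency, features, and weights are hypothetical.

```python
import numpy as np

def gat_layer(h, W, a, adj):
    """Single-head GAT: alpha_ij = softmax_j(LeakyReLU(a^T [W h_i || W h_j]))."""
    z = h @ W                         # projected node features, (N, F')
    n = z.shape[0]
    e = np.full((n, n), -np.inf)      # -inf masks non-neighbors in softmax
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                s = float(a @ np.concatenate([z[i], z[j]]))
                e[i, j] = s if s > 0 else 0.2 * s     # LeakyReLU
    e = e - e.max(axis=1, keepdims=True)              # stable softmax
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ z, alpha           # attended features, attention weights

# Toy 3-node road graph (self-loops included so softmax rows are non-empty).
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 2))   # sensor features
W = rng.normal(size=(2, 4))   # projection weights
a = rng.normal(size=8)        # attention vector over concatenated pairs
out, alpha = gat_layer(h, W, a, adj)
```

Per the citation, these per-timestep spatial encodings would then pass through Transformer blocks for temporal inference over the input window.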