Proceedings of the 31st ACM International Conference on Information & Knowledge Management 2022
DOI: 10.1145/3511808.3557540
Adaptive Graph Spatial-Temporal Transformer Network for Traffic Forecasting

Cited by 35 publications (13 citation statements)
References 12 publications
“…The third combination is a parallel encoding of the DG by independent transformer and GNN layers, followed by a combination of their encoded hidden states [120], merging the strengths of both layers. Additionally, some dynamic graph models [73], [98], [99], [122], [123], [125], [127] use transformers exclusively as graph encoders, exploiting the self-attention mechanism for node hidden-state propagation without relying on traditional GNN architectures.…”
Section: Combination of Transformer with DGNNs
Confidence: 99%
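The parallel combination described in the excerpt can be sketched as follows: a transformer-style self-attention branch and a 1-hop GNN aggregation branch encode the same node features independently, and their hidden states are then concatenated. This is a minimal NumPy illustration of the idea, not the architecture of any cited model; all function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H):
    # transformer-style propagation: every node attends to every node
    scores = H @ H.T / np.sqrt(H.shape[1])
    return softmax(scores, axis=-1) @ H

def gnn_layer(H, A):
    # GNN-style propagation: mean aggregation over 1-hop neighbours (with self-loops)
    A_hat = A + np.eye(A.shape[0])
    return A_hat @ H / A_hat.sum(axis=1, keepdims=True)

def parallel_encode(H, A):
    # run both branches on the same input and combine by concatenation
    return np.concatenate([self_attention(H), gnn_layer(H, A)], axis=1)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy 3-node path graph
H = np.random.default_rng(0).normal(size=(3, 4))        # node features
Z = parallel_encode(H, A)
print(Z.shape)  # (3, 8): 4 attention dims + 4 GNN dims per node
```

In practice the two branch outputs could also be combined by summation or a learned gate; concatenation is used here only because it keeps the sketch simple.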
“…The first method is the embedding based on the time-related index of nodes or edges [98], [100], [120], [124], [127]. The second method utilizes (global) graph spectral information [99], such as the Laplacian or its eigenvectors. The third method relies on spatial domain information or local structures, such as embedding the node degrees or random walks [73], [99], [101], [122], [123], [127], [139].…”
Section: Positional Encoding (PE) on Dynamic Graphs
Confidence: 99%
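The three positional-encoding families listed in the excerpt can each be sketched in a few lines: a sinusoidal embedding of a snapshot's time index (temporal), the smallest-eigenvalue eigenvectors of the graph Laplacian (spectral), and a one-hot embedding of node degree (spatial/local). This is a minimal NumPy sketch under the assumption of an undirected graph; the function names are hypothetical.

```python
import numpy as np

def laplacian_pe(A, k):
    # spectral PE: eigenvectors of L = D - A for the k smallest eigenvalues
    D = np.diag(A.sum(axis=1))
    vals, vecs = np.linalg.eigh(D - A)
    return vecs[:, :k]                      # one k-dim vector per node

def degree_pe(A, dim):
    # spatial PE: one-hot embedding of each node's degree (clipped to dim-1)
    deg = A.sum(axis=1).astype(int)
    pe = np.zeros((len(deg), dim))
    pe[np.arange(len(deg)), np.clip(deg, 0, dim - 1)] = 1.0
    return pe

def time_index_pe(t, dim):
    # temporal PE: sinusoidal embedding of a snapshot's time index t
    i = np.arange(dim // 2)
    angles = t / (10000 ** (2 * i / dim))
    return np.concatenate([np.sin(angles), np.cos(angles)])

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # toy star graph
print(laplacian_pe(A, 2).shape)   # (3, 2)
print(degree_pe(A, 4).shape)      # (3, 4)
print(time_index_pe(3, 8).shape)  # (8,)
```

Random-walk-based encodings (the other spatial variant cited) would replace the degree one-hot with, e.g., the diagonal of powers of the random-walk matrix; the degree version is shown only because it is the shortest to state.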
“…Traffic Transformer [27] designs a global encoder and a global-local encoder to extract global and local spatial features, respectively. To model the time delay of spatial information propagation, PDFormer [28] proposes a traffic delay-aware feature transformation module, and adaptive STTN [29] proposes local spatial-temporal graphs that focus on 1-hop spatial neighbors. In this paper, the CGT model employs counterfactuals to efficiently improve TFP interpretability.…”
Section: Related Work (Traffic Flow Prediction)
Confidence: 99%
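A local spatial-temporal graph of the kind the excerpt attributes to adaptive STTN [29] can be sketched as a block adjacency over consecutive snapshots, with edges only between a node, its 1-hop spatial neighbors, and their copies at adjacent time steps. This NumPy sketch is an assumption-laden illustration of the general construction, not the cited model's exact definition.

```python
import numpy as np

def local_st_graph(A, T):
    # Build a (N*T) x (N*T) adjacency over T consecutive snapshots.
    # Spatial edges stay within a time step; temporal edges connect each
    # node (and its 1-hop neighbours) to the adjacent time step only.
    N = A.shape[0]
    S = A + np.eye(N)                # 1-hop neighbourhood incl. the node itself
    big = np.zeros((N * T, N * T))
    for t in range(T):
        big[t*N:(t+1)*N, t*N:(t+1)*N] = A          # same-step spatial edges
        if t + 1 < T:
            big[t*N:(t+1)*N, (t+1)*N:(t+2)*N] = S  # forward temporal edges
            big[(t+1)*N:(t+2)*N, t*N:(t+1)*N] = S  # backward temporal edges
    return big

A = np.array([[0, 1], [1, 0]], float)  # toy 2-node road graph
G = local_st_graph(A, 3)
print(G.shape)  # (6, 6): 2 nodes x 3 time steps
```

Restricting attention or message passing to this sparse block structure is what keeps such local spatial-temporal models cheaper than full attention over all node-time pairs.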