2022
DOI: 10.1109/tits.2021.3102983
Learning Dynamic and Hierarchical Traffic Spatiotemporal Features With Transformer

Cited by 129 publications (45 citation statements)
References 33 publications
“…Multiself-Attention Machine Module. The attention mechanism was originally used to solve the problem of long-term sequence dependence in machine translation, in which the performance of machine translation decreased significantly with the increase in sentence length [16]. It has been widely used in the processing of various time series data.…”
Section: Methods
confidence: 99%
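The statement above refers to the standard self-attention mechanism in general terms. As a minimal sketch, not the architecture of the cited paper, the following PyTorch module implements scaled dot-product multi-head self-attention over a traffic time series; the sizes `d_model=64`, `num_heads=4`, and the input shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Minimal multi-head self-attention over a sequence of shape (batch, seq_len, d_model)."""
    def __init__(self, d_model=64, num_heads=4):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)  # joint Q, K, V projection
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, seq_len, d_head)
        q, k, v = [z.view(b, t, self.num_heads, self.d_head).transpose(1, 2) for z in (q, k, v)]
        # scaled dot-product attention: every time step attends to every other step,
        # so distant steps are connected directly rather than through a long recurrence
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        weights = scores.softmax(dim=-1)
        ctx = (weights @ v).transpose(1, 2).reshape(b, t, d)
        return self.out(ctx)

# usage: a batch of 8 sequences, 12 historical steps, 64-dimensional traffic features
x = torch.randn(8, 12, 64)
print(MultiHeadSelfAttention()(x).shape)  # torch.Size([8, 12, 64])
```

Because the attention weights connect any pair of time steps in a single hop, performance does not degrade with sequence length the way the quoted statement describes for earlier sequence models.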
“…In such cases, deep graph neural networks (GNNs) are superior at extracting features from transportation networks. Research studies mainly focus on the graph convolution network [16, 38-40] and the spectral convolution network [41]. The graph convolution neural network builds the traffic graph based on the physical network topology and defines a graph convolution neural network to capture spatial features.…”
Section: Advances in Civil Engineering
confidence: 99%
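To make the graph-convolution idea concrete, here is a minimal sketch of a single symmetrically normalized graph convolution layer applied to node features of a small road graph. The toy adjacency matrix, feature sizes, and the class name `GraphConvLayer` are illustrative assumptions, not the specific model of any cited work.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) on a fixed road graph."""
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        a_hat = adj + torch.eye(adj.size(0))        # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5).diag()
        self.register_buffer("a_norm", d_inv_sqrt @ a_hat @ d_inv_sqrt)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h):
        # aggregate each sensor's features with its neighbors on the physical topology
        return torch.relu(self.a_norm @ self.lin(h))

# toy road network: 4 sensors, undirected links, 2 features per sensor (e.g. speed, flow)
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
h = torch.randn(4, 2)
print(GraphConvLayer(2, 8, adj)(h).shape)  # torch.Size([4, 8])
```

The normalized adjacency encodes the physical network topology once, so spatial features are captured by repeated neighborhood aggregation rather than by a grid convolution.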
“…Nicolas et al. [3] proposed DETR, which applied a transformer to the object detection task. Yan et al. use a transformer to predict long-term traffic flow [33]. Cai et al. [34] use a transformer to capture the spatial dependency of time series with continuity and periodicity.…”
Section: Transformers Structure
confidence: 99%
“…Yan et al. [45] improved the transformer structure by introducing a global decoder and a global-local decoder. The multi-head attention is used to extract non-local features, and the masked multi-head attention is focused on extracting local features.…”
Section: Differences Between SDSCNN and Existing Work
confidence: 99%
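The quoted statement contrasts multi-head attention (non-local features) with masked multi-head attention (local features). One common way to realize the local part is a banded attention mask that limits each step to a small temporal window; the sketch below assumes that interpretation, and the window size, shapes, and function names are hypothetical rather than the exact design of [45].

```python
import torch

def local_attention_mask(seq_len, window=3):
    """Boolean mask allowing each position to attend only within a +/- `window` neighborhood."""
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() <= window

def masked_attention(q, k, v, mask):
    # scaled dot-product attention with positions outside the mask blocked out,
    # so the result reflects local temporal features only
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v

# 12 time steps, 16-dimensional features; unmasked attention would be the global counterpart
q = k = v = torch.randn(1, 12, 16)
out = masked_attention(q, k, v, local_attention_mask(12))
print(out.shape)  # torch.Size([1, 12, 16])
```

Dropping the mask (or passing an all-True mask) recovers ordinary global attention, which matches the non-local role attributed to the unmasked multi-head attention above.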