2020
DOI: 10.3390/app11010017

Deep Wide Spatial-Temporal Based Transformer Networks Modeling for the Next Destination According to the Taxi Driver Behavior Prediction

Abstract: This paper applies a transformer-based neural network model of taxi driver behavior to predict the next destination, taking geographical factors into account. Predicting the next destination is a well-studied human-mobility problem, relevant to reducing traffic congestion and optimizing the performance of electronic dispatching systems. Within Intelligent Transport Systems (ITS), this task is usually modeled as a multi-class classification problem. We propose the novel model Deep Wide Spatial-Temporal-Based Transforme…
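Read at face value, the abstract frames next-destination prediction as multi-class classification over a discretized set of destinations, with a transformer encoding the driver's recent spatio-temporal trajectory. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that framing, and every name in it (NextDestinationTransformer, num_cells, the time-bucket encoding, etc.) is an assumption, not taken from the paper.

```python
# Minimal sketch (assumed architecture, not the paper's code): a transformer
# encoder reads a driver's recent trajectory, where each step is a discretized
# location cell plus a time-of-day bucket, and classifies the next destination cell.
import torch
import torch.nn as nn

class NextDestinationTransformer(nn.Module):            # hypothetical name
    def __init__(self, num_cells=2500, num_time_buckets=48, d_model=128,
                 nhead=4, num_layers=2, max_len=32):
        super().__init__()
        self.cell_emb = nn.Embedding(num_cells, d_model)
        self.time_emb = nn.Embedding(num_time_buckets, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_cells)        # multi-class output

    def forward(self, cells, times):
        # cells, times: (batch, seq_len) integer indices of visited grid cells
        # and their time-of-day buckets.
        positions = torch.arange(cells.size(1), device=cells.device)
        x = self.cell_emb(cells) + self.time_emb(times) + self.pos_emb(positions)
        h = self.encoder(x)                              # (batch, seq_len, d_model)
        return self.head(h[:, -1, :])                    # logits over the next cell

# Usage: score a batch of 8 synthetic trajectories of length 10.
model = NextDestinationTransformer()
cells = torch.randint(0, 2500, (8, 10))
times = torch.randint(0, 48, (8, 10))
logits = model(cells, times)                             # shape (8, 2500)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2500, (8,)))
```

Treating destinations as grid cells keeps the output layer a plain softmax classifier, which matches the multi-class formulation the abstract describes; the actual paper may discretize space differently.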

Cited by 25 publications (5 citation statements)
References 32 publications
“…Given that flow generation is a task for estimating the flows between a group of geographic places based on those locations’ data (such as population, POIs, land use, and distance to other sites), it is a separate task from next-place prediction. Finally, only one study presented a transformer-like architecture, the DWSTTN (Deep Wide Spatio Temporal Transformer Network) [ 33 ]. The DWSTTN uses historical pick-up and drop-off data from taxi companies in Porto and Manhattan to predict a taxi’s next destination.…”
Section: Related Work
confidence: 99%
“…STSAN [21] has three attention modules, and not only incorporates spatial-temporal factors but also captures patterns of location and user preference change. DWSTTN [22] used a transformer-based model to dynamically capture long-range dependencies. HGMAP [23] combined GCNs and multi-head attention.…”
Section: Neural Network Applications in Trajectory Prediction
confidence: 99%
“…Transformer [9] entirely relies on the attention mechanism to model the global dependencies of the sequence and breaks through the limitation that RNN cannot be parallelized. Deep Wide Spatio-Temporal Transformer Network (DWSTTN) [36] uses two attention mechanisms to extract relevant information in time and space, respectively. Graph Convolutional Dual-Attentive Networks (GCDAN) [12] design a dual-attention mechanism within and between trajectories and use graph convolution to extract spatial features in the embedding layer.…”
Section: Related Work, 2.1 Mobility Prediction
confidence: 99%
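The last quoted statement attributes to DWSTTN two separate attention mechanisms, one over temporal features and one over spatial features. The following is only one hedged reading of that description, sketched in PyTorch; DualAttentionEncoder, the (lat, lon) and (hour, weekday) inputs, and the fusion layer are illustrative assumptions, not the published architecture.

```python
# Illustrative sketch of the dual-attention idea described above: one multi-head
# attention block over spatial features and one over temporal features, fused
# before a next-destination classifier. Hypothetical, not the authors' code.
import torch
import torch.nn as nn

class DualAttentionEncoder(nn.Module):                   # hypothetical name
    def __init__(self, d_model=64, nhead=4, num_cells=2500):
        super().__init__()
        self.spatial_proj = nn.Linear(2, d_model)         # (lat, lon) per step
        self.temporal_proj = nn.Linear(2, d_model)        # (hour, weekday) per step
        self.spatial_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.head = nn.Linear(d_model, num_cells)

    def forward(self, coords, times):
        # coords: (batch, seq_len, 2) normalized pick-up/drop-off coordinates
        # times:  (batch, seq_len, 2) normalized time-of-day and day-of-week
        s = self.spatial_proj(coords)
        t = self.temporal_proj(times)
        s_ctx, _ = self.spatial_attn(s, s, s)             # spatial self-attention
        t_ctx, _ = self.temporal_attn(t, t, t)            # temporal self-attention
        fused = self.fuse(torch.cat([s_ctx[:, -1], t_ctx[:, -1]], dim=-1))
        return self.head(torch.relu(fused))               # logits over the next cell

model = DualAttentionEncoder()
logits = model(torch.rand(4, 12, 2), torch.rand(4, 12, 2))  # shape (4, 2500)
```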