2021 IEEE Globecom Workshops (GC Wkshps) 2021
DOI: 10.1109/gcwkshps52748.2021.9682164
Deep Learning Approaches for Mobile Trajectory Prediction

Cited by 5 publications (4 citation statements)
References 16 publications
“…Recent studies have focused on utilizing DL-based multi-output models for user mobile path prediction, such as encoder-decoder model that is built upon Seq2Seq framework [15], [34] and multimodal recurrent framework with multi-output approach [43]. A recent study utilizes encoder-decoder model architecture with Gated Recurrent Unit (GRU) to predict mobile trajectories [44]. It enhances the prediction accuracy by implementing the attention block in the encoder at the cost of elevated computation.…”
Section: Multi-output Approach For Multi-step Prediction
confidence: 99%
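The encoder-decoder (Seq2Seq) pattern the citing papers describe can be sketched minimally in plain numpy: an encoder GRU consumes the observed trajectory, and a decoder GRU rolls out several future steps in one pass, feeding each prediction back in. All dimensions, weight initializations, and names below are illustrative assumptions, not the cited authors' implementation, and the attention block of [44] is deliberately omitted (it is precisely the part that raises the computation cost).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell with random (untrained) weights; illustration only."""
    def __init__(self, in_dim, hid_dim, rng):
        def w(shape):
            return rng.standard_normal(shape) * 0.1
        self.Wz, self.Uz, self.bz = w((hid_dim, in_dim)), w((hid_dim, hid_dim)), np.zeros(hid_dim)
        self.Wr, self.Ur, self.br = w((hid_dim, in_dim)), w((hid_dim, hid_dim)), np.zeros(hid_dim)
        self.Wh, self.Uh, self.bh = w((hid_dim, in_dim)), w((hid_dim, hid_dim)), np.zeros(hid_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)            # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)            # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh) # candidate state
        return (1.0 - z) * h + z * h_cand

def seq2seq_predict(encoder, decoder, out_proj, observed, n_steps):
    """Encode an observed trajectory, then decode n_steps future points,
    feeding each prediction back as the next decoder input (multi-output)."""
    h = np.zeros(encoder.bz.shape[0])
    for x in observed:                  # encoder pass over the history
        h = encoder.step(x, h)
    x = observed[-1]
    preds = []
    for _ in range(n_steps):            # decoder rollout, no re-encoding per step
        h = decoder.step(x, h)
        x = out_proj @ h                # map hidden state back to trajectory space
        preds.append(x)
    return preds
```

Because the decoder emits all future steps from a single encoding, this multi-output layout naturally supports the three-, five-, and seven-step-ahead predictions discussed in the citing work.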
“…For example, AP ID 3 is converted into vector [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0] and 5 into vector [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]. This representation of elements is adapted for the comparison models, GRU-Attention (GRU-ATTN) model [44], and Transformer Network (TN) model [45].…”
Section: B Data Collection and Preprocessing
confidence: 99%
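The one-hot encoding of access-point IDs quoted above can be sketched in a few lines of Python. The twelve-element vocabulary follows the example vectors in the quote; the function name is a hypothetical helper, not taken from the cited work:

```python
def one_hot_ap(ap_id, num_aps=12):
    """Encode a 1-indexed access-point ID as a one-hot vector of length num_aps."""
    vec = [0] * num_aps
    vec[ap_id - 1] = 1   # AP ID 3 -> 1 at index 2, matching the example in the text
    return vec
```

For instance, `one_hot_ap(3)` yields `[0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]`, exactly the vector given in the citation statement.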
“…The results of three, five, and seven steps ahead predictions for SFI model and ED model are depicted in Fig. 8 as a function of input sequence length, and compared against two state-of-the-art models GRU-ATTN [44], and TN [45]. For the comprehensive analyses, the results of CMD and OMD are separately illustrated in Figs.…”
Section: Epoch
confidence: 99%