2021
DOI: 10.32604/jbd.2021.016993

Survey on Research of RNN-Based Spatio-Temporal Sequence Prediction Algorithms

Cited by 58 publications (21 citation statements)
References 22 publications
“…Compared with CNNs, the defining feature of the Recurrent Neural Network (RNN) and the LSTM is the connection added between the current time step and the next in the hidden layer, so that the output at the current moment can influence the next moment. RNNs have been used by some methods to analyze spatio-temporal sequences [27]. Thus, RNN and LSTM may be better at capturing the temporal characteristics of sequence data, which is more consistent with the PoI change prediction described in Section 2.…”
Section: CNN (mentioning)
confidence: 69%
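
As a rough illustration of the recurrent connection described in the statement above (a minimal sketch, not code from the cited works), a vanilla RNN step in NumPy could look like this, with the hidden state carrying information from the current moment into the next:

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # The new hidden state depends on both the current input and the
    # previous hidden state, so each step influences the next one.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

rng = np.random.default_rng(0)
input_dim, hidden_dim, n_steps = 4, 8, 5     # toy sizes, chosen arbitrarily
W_x = 0.1 * rng.normal(size=(input_dim, hidden_dim))
W_h = 0.1 * rng.normal(size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for t in range(n_steps):
    x_t = rng.normal(size=input_dim)         # stand-in for one time step of data
    h = rnn_step(x_t, h, W_x, W_h, b)        # the output at step t feeds into step t + 1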
“…In the last decade, Recurrent Neural Networks (RNNs) have gained widespread attention and developed rapidly thanks to their powerful and effective modeling capabilities [19]. However, the traditional RNN suffers from short-term memory and vanishing-gradient problems [20,21,22], which make it difficult to capture long-term dependencies, an important factor for drawing on relevant historical data over long time series to accurately predict future weather. Among RNN variants, the Long Short-Term Memory (LSTM) based RNN overcomes the drawbacks of the traditional RNN and models long-term dependencies between training samples [2,23,24,25].…”
Section: Related Work (mentioning)
confidence: 99%
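
To illustrate the long-term-dependency modeling attributed to the LSTM above, here is a minimal sketch assuming PyTorch; the model name, feature count, and sequence length are hypothetical and not taken from the cited papers:

import torch
import torch.nn as nn

class SequencePredictor(nn.Module):          # hypothetical name, for illustration only
    def __init__(self, n_features=3, hidden_size=32):
        super().__init__()
        # The LSTM's cell state helps retain information across long sequences.
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_features)

    def forward(self, x):                    # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])         # predict the next step from the last hidden state

model = SequencePredictor()
x = torch.randn(2, 96, 3)                    # two toy sequences of 96 time steps
next_step = model(x)                         # shape: (2, 3)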
“…(1) Data sorting: After splitting the original video set into multiple frame-level sequences {frame_1, frame_2, …, frame_n}, the video frames in these sequences [28] are unevenly partitioned into multiple non-overlapping chunks of size x_n × y_n. (2) Motion feature extraction: Because optical-flow feature extraction alone often suffers from information omission, the inter-frame difference method is used to obtain difference images by subtracting the previous frame from the next, and the resulting images are binarized to locate the moving targets. In parallel, the chunks at the same location in adjacent video frames are fed into Flownet2 [29] to obtain the optical-flow features of each chunk in the video frame.…”
Section: Overall Structure and Processing Flow (mentioning)
confidence: 99%
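
The inter-frame difference step described above can be sketched as follows, assuming OpenCV; the input path is hypothetical, and the chunk partitioning and Flownet2 optical-flow stages are omitted:

import cv2

cap = cv2.VideoCapture("video.mp4")          # hypothetical input path
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read the first frame")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)      # subtract the previous frame from the current one
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # binarize to locate moving targets
    prev_gray = gray

cap.release()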