2018
DOI: 10.1016/j.neucom.2018.05.090
Period-aware content attention RNNs for time series forecasting with missing values

Cited by 94 publications (54 citation statements)
References 16 publications
“…The attention mechanism in deep learning draws on the way humans allocate attention, and it has shown its effectiveness in many fields and applications, such as computer vision [33], natural language processing [34], [35], and prediction [36], [37]. Specifically, it is a weighted-sum strategy that automatically and adaptively determines which parts of the input data are more significant.…”
Section: Attention Mechanism
confidence: 99%
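The weighted-sum idea in the statement above can be sketched in a few lines. This is an illustrative NumPy example only, not the mechanism of the cited paper; the function names, dot-product scoring, and dimensions are assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool(hidden_states, query):
    """Weighted-sum attention over a sequence of hidden states.

    hidden_states: (T, d) array of encoder states.
    query: (d,) vector used to score each time step.
    Returns the context vector (d,) and the attention weights (T,).
    """
    scores = hidden_states @ query      # relevance score per time step
    weights = softmax(scores)           # weights sum to 1
    context = weights @ hidden_states   # adaptive weighted sum of the input
    return context, weights

# Toy usage: 5 time steps, 4-dimensional states (illustrative values).
H = np.random.randn(5, 4)
q = np.random.randn(4)
ctx, w = attention_pool(H, q)
```

The attention weights make explicit which time steps dominate the pooled representation, which is the "adaptive analysis of significance" the statement refers to.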
“…An RNN is a neural network in which the output of the network is fed back as part of its input at the next time step [20][21][22]. In this way, the output of the previous time step is passed back into the network, so the computation at each time step depends on the content of the preceding time steps, giving the model a memory of the sequence.…”
Section: Recurrent Neural Network (RNN)
confidence: 99%
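A minimal sketch of that feedback loop, assuming a plain Elman-style recurrence; the weight names and dimensions below are illustrative, not taken from the cited papers.

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    """Plain RNN: the hidden state of the previous time step is fed back
    into the computation at the current time step.

    inputs: (T, d_in) sequence; returns (T, d_h) hidden states.
    """
    T = inputs.shape[0]
    d_h = W_hh.shape[0]
    h = np.zeros(d_h)  # initial hidden state
    outputs = []
    for t in range(T):
        # The current state depends on the current input AND the previous state.
        h = np.tanh(inputs[t] @ W_xh + h @ W_hh + b_h)
        outputs.append(h)
    return np.stack(outputs)

# Toy usage: sequence of length 6, input dim 3, hidden dim 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 3))
W_xh = rng.normal(size=(3, 4)) * 0.1
W_hh = rng.normal(size=(4, 4)) * 0.1
hs = rnn_forward(x, W_xh, W_hh, np.zeros(4))
```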
“…Compared with LSTM, GRU converges faster with no loss of accuracy. However, when the input time series is long, RNN-family networks such as LSTM and GRU tend to lose sequence information and have difficulty modelling the structural relations within the data, which affects model accuracy [32]. The attention mechanism is a resource-allocation mechanism that assigns different weights to the input features, so that features carrying important information do not fade as the step size grows; it highlights the influence of the more important information and makes it easier for the model to learn long-distance dependencies in the sequence [33].…”
Section: Introduction
confidence: 99%
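A rough illustration of pairing a GRU encoder with attention pooling, as described in the statement above. This is an assumed PyTorch layout for the general idea, not the architecture of the cited paper or of the citing works; the class name, scoring layer, and shapes are hypothetical.

```python
import torch
import torch.nn as nn

class GRUWithAttention(nn.Module):
    """GRU encoder followed by attention pooling, so informative time steps
    are re-weighted instead of fading over long sequences."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        self.score = nn.Linear(hidden_size, 1)  # learns a score per time step

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        states, _ = self.gru(x)                                   # (batch, seq, hidden)
        weights = torch.softmax(self.score(states).squeeze(-1), dim=1)
        # Weighted sum over all hidden states: distant but important steps
        # keep their influence through their attention weights.
        context = (weights.unsqueeze(-1) * states).sum(dim=1)     # (batch, hidden)
        return context, weights

# Toy usage: batch of 2 sequences, length 24, 8 features per step.
model = GRUWithAttention(input_size=8, hidden_size=16)
ctx, w = model(torch.randn(2, 24, 8))
```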