2021
DOI: 10.1609/aaai.v35i12.17325

Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

Abstract: Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly ap…

Cited by 2,277 publications (957 citation statements)
References 20 publications
“…The next development in time-series modeling came from the attention mechanism. Models such as Informer (Zhou et al 2021) and DA-RNN (Qin et al 2017) improved the precision of attention-based forecasting. Graph neural networks are newer models that convert dense connections into sparse graph structures, fitting information in explicitly or implicitly graph-structured data, such as MTGNN (Wu et al 2020).…”
Section: Deep Learning Methods (mentioning)
confidence: 99%
“…In this chapter, we analyze two datasets with contrasting characteristics and their extension tasks, validate the predictive capability of PureTS, and compare the running times and parameter sizes of several models side by side. The eight publicly available benchmark datasets are split into short-sequence and long-sequence prediction in accordance with previous experimental settings (Zhang et al 2022; Zhou et al 2021; Lai et al 2018); the compared models include Reformer (Kitaev, Kaiser, and Levskaya 2020) and Informer (Zhou et al 2021).…”
Section: Methods (mentioning)
confidence: 99%
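
The short- versus long-sequence split quoted above mostly comes down to the forecast horizon used when windowing each benchmark series. A minimal sketch of that windowing in Python, assuming a univariate series and hypothetical lengths (input_len and pred_len here are illustrative, not the cited papers' exact settings):

import numpy as np

def make_windows(series, input_len, pred_len):
    # Slide a window over a 1-D series and return (input, target) pairs.
    # input_len feeds the model; pred_len is the horizon it must forecast.
    X, Y = [], []
    for start in range(len(series) - input_len - pred_len + 1):
        X.append(series[start:start + input_len])
        Y.append(series[start + input_len:start + input_len + pred_len])
    return np.asarray(X), np.asarray(Y)

# Toy hourly signal standing in for a benchmark series such as ETT or Electricity.
rng = np.random.default_rng(0)
series = np.sin(np.arange(2000) * 2 * np.pi / 24) + 0.1 * rng.standard_normal(2000)

# "Short" and "long" sequence prediction differ mainly in the forecast horizon.
X_short, Y_short = make_windows(series, input_len=96, pred_len=24)   # short horizon
X_long, Y_long = make_windows(series, input_len=96, pred_len=336)    # long horizon (LSTF)
print(X_short.shape, Y_short.shape, X_long.shape, Y_long.shape)
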
“…In particular, time series forecasting models using the Transformer, developed in the field of natural language processing [54], achieve high prediction performance [12,31,33]. For example, Informer [58], one of the Transformer-based models, achieves strong results in long-horizon forecasting. Deep learning-based models achieve high forecasting performance; however, they remain problematic in terms of interpretability.…”
Section: Time Series Forecasting and Modeling (mentioning)
confidence: 99%
“…• Informer [58], a Transformer-based model built on ProbSparse self-attention and self-attention distilling, is known for its remarkable efficiency in long time-series forecasting. We select a two-layer stack for both the encoder and the decoder and set the decoder token length to 26 (half of the prediction sequence length).…”
Section: Optimization (mentioning)
confidence: 99%
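
The setup quoted above boils down to a handful of hyperparameters: two encoder layers, two decoder layers, ProbSparse attention with distilling, and a decoder start-token length equal to half the prediction length. A minimal sketch of such a configuration in Python; the field names loosely mirror the public Informer implementation (e_layers, d_layers, label_len, pred_len), but the dataclass itself is an illustrative assumption, not the model's actual interface:

from dataclasses import dataclass

@dataclass
class InformerConfig:
    seq_len: int = 96      # encoder input length
    pred_len: int = 52     # forecast horizon
    label_len: int = 26    # decoder start-token length
    e_layers: int = 2      # encoder stack depth (two-layer stack, as quoted)
    d_layers: int = 2      # decoder stack depth
    attn: str = "prob"     # ProbSparse self-attention variant
    distil: bool = True    # self-attention distilling between encoder layers

def config_for_horizon(pred_len: int) -> InformerConfig:
    # Decoder token length is set to half the prediction length, as in the quote.
    return InformerConfig(pred_len=pred_len, label_len=pred_len // 2)

print(config_for_horizon(52))  # label_len == 26 for a 52-step horizon
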