2022
DOI: 10.1007/978-981-19-6135-9_21

An Ensemble Deep Learning Model Based on Transformers for Long Sequence Time-Series Forecasting

Cited by 6 publications (1 citation statement)
References 20 publications

“…Recently, the transformer has performed best at modeling sequential data, becoming a popular model for ML. [25][26][27][28][29] Moreover, by adopting self-attention, the transformer can capture global dependencies between features and labels, offering higher parallelism and computational efficiency than recurrent neural networks. 30 Based on these merits, it is a powerful and promising approach to compensating for NLI in long-haul, large-capacity optical transmission systems.…”
Section: Introduction
confidence: 99%
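
The quoted statement attributes the transformer's advantage to self-attention, which relates every sequence position to every other position in a single parallel step rather than through an RNN's sequential recurrence. As a minimal illustration only, not code from the cited paper or the citing work, the following NumPy sketch of scaled dot-product self-attention shows how each output position is computed from all input positions at once; all names, shapes, and values here are assumptions made for the example.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention for one sequence.
    # X: (seq_len, d_model) input embeddings.
    # Wq, Wk, Wv: (d_model, d_k) projection matrices (hypothetical).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # (seq_len, seq_len) pairwise affinities: every position attends to
    # every other position, so global dependencies are captured in one
    # parallel step instead of a sequential recurrence.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V  # (seq_len, d_k)

# Toy usage: a length-6 sequence of 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (6, 4)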