2023
DOI: 10.22541/essoar.167870317.70650422/v1
Preprint

Synthesis-Style Pre-trained Auto-Correlation Transformer: A Zero-shot Learner on Long Ionospheric TEC Series Forecasting

Abstract: In this paper, we present a novel approach to improving the accuracy of TEC prediction through data augmentation. Prior works that adopt various deep-learning-based approaches suffer from two major problems. First, from a deep-model perspective: LSTM models perform poorly on long-term data dependencies, while self-attention-based methods ignore the temporal nature of time series, resulting in an information-utilization bottleneck. Second, actual TEC data is limited, and existing generative…
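The title's "Auto-Correlation Transformer" refers to attention built on series periodicity rather than point-wise similarity. As a minimal sketch (not the paper's implementation), the core signal statistic can be computed efficiently with the FFT via the Wiener-Khinchin theorem; the synthetic diurnal TEC series below is an assumed illustration:

```python
import numpy as np

def autocorrelation(x):
    """Normalized linear auto-correlation of a 1-D series via FFT."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, n=2 * n)          # zero-pad -> linear (not circular) correlation
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]                  # normalize so lag 0 == 1

# Synthetic stand-in for a TEC series: 50 "days" sampled hourly,
# a diurnal (24-sample) cycle plus mild noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 50)
tec = 10 + 3 * np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)

acf = autocorrelation(tec)
dominant_lag = 1 + int(np.argmax(acf[1:100]))
print(dominant_lag)  # recovers the 24-sample diurnal period
```

Auto-correlation-based attention (as in Autoformer-style models) uses such dominant lags to aggregate sub-series at period offsets, which is why it handles long-range dependencies better than point-wise self-attention.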

Cited by 2 publications (1 citation statement). References 3 publications (4 reference statements).
“…Moreover, some studies have utilized Convolutional Long Short‐Term Memory neural networks, which incorporate spatial information into LSTM, to predict ionospheric parameters (Liu et al., 2022; Luo et al., 2023; Xia et al., 2022). Some studies used the Transformer to predict TEC, and by increasing the depth of the model, the prediction accuracy of the model was further improved (Shih et al., 2024; Yuan et al., 2023). In addition, nowcasting technique has been improved by Chen et al.…”
Section: Introduction
confidence: 99%