2022
DOI: 10.1016/j.egyr.2022.05.290
M2TNet: Multi-modal multi-task Transformer network for ultra-short-term wind power multi-step forecasting

Cited by 34 publications (7 citation statements)
References 24 publications

“…Alternatively, deep learning methods have shown promising performance in the fields of computer vision, natural language processing and speech recognition [10]. A number of deep learning-based methods have been proposed to predict wind power, such as long short-term memory (LSTM) [11], convolutional neural network (CNN) [12], graph neural network (GNN) [13] and transformer models [14], achieving outstanding prediction performance. An artificial neural network (ANN) model was proposed to consider both meteorological features and turbine-level signals [15], while Miele et al. utilized two LSTM-based modules to combine local information from the turbine's internal operating conditions with future meteorological data from the surrounding area [16].…”
Section: Introductionmentioning
confidence: 99%
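The statement above surveys deep-learning models (LSTM, CNN, GNN, transformers) for multi-step wind power prediction. Independent of the specific model, such methods share one preprocessing step: framing the series as supervised input/output windows. A minimal NumPy sketch of that framing is shown below; `make_windows` and the toy series are illustrative, not from the cited works.

```python
import numpy as np

def make_windows(series, n_in, n_out):
    """Frame a series for multi-step forecasting: each sample maps
    n_in past values to the n_out future values (hypothetical helper)."""
    X, Y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i : i + n_in])            # model input window
        Y.append(series[i + n_in : i + n_in + n_out])  # forecast targets
    return np.array(X), np.array(Y)

power = np.sin(np.linspace(0.0, 6.0, 100))  # toy wind-power series
X, Y = make_windows(power, n_in=12, n_out=4)
print(X.shape, Y.shape)  # (85, 12) (85, 4)
```

Any of the cited architectures would then be trained to map each row of `X` to the corresponding row of `Y`.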
“…As newer, more capable architectures [23,24] arise, CLFormer [25] networks, optimized with a linear attention mechanism and convolutional embedding, have improved the efficient extraction of global features for fault diagnosis. Some architectures, such as ViT [26], also exhibit strong fault diagnosis capabilities as vision models.…”
Section: Introductionmentioning
confidence: 99%
“…The CSWin Transformer [28] proposes a cross-shaped window self-attention mechanism spanning both horizontal and vertical dimensions. Swin-T and MLP are combined in the SwinMLP [24] network, which employs Swin-T for image feature extraction and an MLP for feature classification.…”
Section: Introductionmentioning
confidence: 99%
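The window- and cross-shaped attention variants cited above all build on standard scaled dot-product self-attention, restricting which token pairs may attend to each other. As a reference point, a plain (unrestricted) self-attention sketch in NumPy, with illustrative shapes and randomly initialized projection matrices:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token matrix x.
    Windowed variants would mask `scores` to a local region."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable row-wise softmax over attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                    # 5 tokens, dim 8
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

A cross-shaped window, as in CSWin, would apply this same computation separately over horizontal and vertical stripes of an image's token grid.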
“…Wind speed prediction methods generally use machine learning or statistical approaches [11], without the need to establish physical models related to the actual environment, such as terrain, surface roughness, and meteorological conditions [12]. Statistical methods typically establish mapping relationships between data by learning the patterns of historical wind speed data, including Kalman filtering [13,14], exponential smoothing [15], and AutoRegressive Integrated Moving Average (ARIMA) models [16,17].…”
Section: Introductionmentioning
confidence: 99%
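Of the statistical baselines named above, exponential smoothing is the simplest to state concretely: the smoothed level is a convex combination of the newest observation and the previous level, and the final level serves as the one-step-ahead forecast. A pure-Python sketch with a toy wind-speed series (values and `alpha` are illustrative):

```python
def exp_smooth_forecast(series, alpha):
    """Simple exponential smoothing:
    level_t = alpha * y_t + (1 - alpha) * level_{t-1};
    the last level is the one-step-ahead forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

wind_speed = [5.0, 5.5, 6.0, 5.8, 6.2]  # toy readings in m/s
print(round(exp_smooth_forecast(wind_speed, alpha=0.5), 3))  # 5.956
```

ARIMA models [16,17] generalize this idea by combining autoregressive terms, differencing, and moving-average terms fitted to the historical series.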