2021
DOI: 10.1109/access.2020.3047903
A Hybrid Deep Interval Prediction Model for Wind Speed Forecasting

Abstract: How to predict wind speed with high accuracy is a fundamental issue for the generation of wind power and energy management of power systems. Moreover, the nonlinear and non-stationary characteristics of wind speed make the task extremely challenging. To resolve this issue, a novel hybrid interval prediction model based on long short-term memory (LSTM) networks and variational mode decomposition (VMD) algorithm is developed in the frame of lower upper bound estimation in this study. Firstly, VMD is applied to d…

Cited by 23 publications (8 citation statements)
References 48 publications
“…To optimally tune the hyperparameters of the CBLSTMAE model, the CSO algorithm is utilized, thereby reducing the mean square error (MSE). The CSO approach is preferred over other optimization techniques due to its high parallelism and simplicity [37][38][39][40]. CSO mimics the behavior and movement of a chicken swarm and is described as follows: CSO contains various groups, and each group has some chicks, hens, and a dominant rooster [41].…”
Section: Level II: CSO-Based Hyperparameter Tuning Process (mentioning)
confidence: 99%
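The group structure described in this statement can be illustrated with a short sketch. Below is a minimal, simplified chicken-swarm-style optimizer in Python; it is not the cited paper's implementation. The rooster/hen/chick roles and periodic regrouping follow the description above, while the update equations are deliberately simplified, and the objective, bounds, and population sizes are illustrative placeholders.

```python
import numpy as np

def cso_minimize(objective, dim, bounds, n_pop=20, n_iter=100,
                 rooster_frac=0.2, chick_frac=0.2, regroup_every=10, seed=0):
    """Simplified chicken-swarm-style optimizer (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_pop, dim))
    fit = np.array([objective(x) for x in pop])
    n_roost = max(1, int(rooster_frac * n_pop))
    n_chick = max(1, int(chick_frac * n_pop))
    best_x, best_f = pop[fit.argmin()].copy(), fit.min()

    for t in range(n_iter):
        if t % regroup_every == 0:
            order = np.argsort(fit)                  # best individuals first
            roosters = order[:n_roost]               # fittest lead the groups
            chicks = order[-n_chick:]                # least fit follow a mother hen
            hens = order[n_roost:n_pop - n_chick]
            leader_of = {int(h): int(rng.choice(roosters)) for h in hens}
            mother_of = {int(c): int(rng.choice(hens)) for c in chicks}

        new_pop = pop.copy()
        for r in roosters:                           # roosters: random self-exploration
            sigma = 1.0 if fit[r] <= fit[rng.choice(roosters)] else 2.0
            new_pop[r] = pop[r] * (1 + rng.normal(0.0, sigma, dim))
        for h in hens:                               # hens: move toward their rooster
            new_pop[h] = pop[h] + rng.random(dim) * (pop[leader_of[int(h)]] - pop[h])
        for c in chicks:                             # chicks: follow their mother hen
            new_pop[c] = pop[c] + rng.uniform(0, 2, dim) * (pop[mother_of[int(c)]] - pop[c])

        new_pop = np.clip(new_pop, lo, hi)
        new_fit = np.array([objective(x) for x in new_pop])
        improved = new_fit < fit                     # greedy acceptance of improvements
        pop[improved], fit[improved] = new_pop[improved], new_fit[improved]
        if fit.min() < best_f:
            best_f, best_x = fit.min(), pop[fit.argmin()].copy()
    return best_x, best_f

# Toy usage: in hyperparameter tuning the objective would be a model's validation MSE.
best_params, best_obj = cso_minimize(lambda x: float(np.sum((x - 3.0) ** 2)),
                                     dim=2, bounds=(-10.0, 10.0))
print(best_params, best_obj)
```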
“…There are three main processes: preprocessing of the dataset, deterministic wind power forecasting using XGBoost, and probabilistic wind power forecasting based on the LUBE-LSTM (Lower Upper Bound Estimation-Long Short-Term Memory) model. The LUBE method was selected because it is advantageous for wind power forecasting; several previous studies [5,6,27] have verified its effectiveness. LUBE can be constructed directly with the LSTM model, enabling predictors to select more input variables and generate two outputs (lower and upper bounds).…”
Section: Preprocessing Of Datasets (mentioning)
confidence: 99%
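A rough illustration of the LUBE construction mentioned above: an LSTM whose head emits two values per input window, interpreted as the lower and upper interval bounds. This is a hypothetical PyTorch sketch, not the cited model; the coverage-width style loss uses a smooth sigmoid approximation of the coverage indicator so it can be trained by gradient descent, whereas classic LUBE often optimizes a non-differentiable criterion with heuristics.

```python
import torch
import torch.nn as nn

class LubeLSTM(nn.Module):
    """LSTM that emits a (lower, upper) prediction interval per input window."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # two outputs: lower and upper bound

    def forward(self, x):                           # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        z = self.head(out[:, -1, :])                # use the last time step
        lower = torch.minimum(z[:, 0], z[:, 1])     # keep the bounds ordered
        upper = torch.maximum(z[:, 0], z[:, 1])
        return lower, upper

def lube_style_loss(lower, upper, y, alpha=0.05, eta=10.0, s=50.0):
    """Schematic coverage-width criterion: prefer narrow intervals that cover y."""
    width = (upper - lower).mean()
    # Smooth sigmoid stand-in for the hard "y inside interval" indicator.
    covered = (torch.sigmoid(s * (y - lower)) * torch.sigmoid(s * (upper - y))).mean()
    coverage_penalty = torch.relu((1 - alpha) - covered)
    return width + eta * coverage_penalty

# Toy usage with random tensors standing in for wind-speed feature windows.
model = LubeLSTM(n_features=4)
x = torch.randn(32, 24, 4)                          # 32 windows, 24 steps, 4 features
y = torch.randn(32)
lower, upper = model(x)
loss = lube_style_loss(lower, upper, y)
loss.backward()
```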
“…The fusion combination is optimized by other prediction methods at different prediction stages, including input data stabilization, model parameter optimization, and output error correction. Based on empirical mode decomposition (EMD) [22][23][24][25], variational mode decomposition (VMD) [26][27][28][29], analytical mode decomposition (AMD) [30,31], wavelet decomposition [14,25,32], and so on, the wind speed sequence data are preprocessed to make them stationary, and better prediction results are achieved.…”
Section: Introduction (mentioning)
confidence: 99%
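As a concrete example of the decomposition-based preprocessing listed above, the sketch below applies empirical mode decomposition (one of the cited options) to a placeholder wind speed series and forecasts each intrinsic mode function separately before recombining. It assumes the third-party PyEMD package (published as EMD-signal on PyPI); the data and the per-component forecaster are stand-ins.

```python
import numpy as np
from PyEMD import EMD        # third-party package, installed as "EMD-signal" on PyPI

# Placeholder wind-speed series standing in for real measurements.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
wind_speed = 8 + 2 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.5, t.size)

# Decompose the non-stationary series into intrinsic mode functions (IMFs).
imfs = EMD()(wind_speed)     # array of shape (n_imfs, n_samples)

# Typical hybrid scheme: forecast each smoother sub-series separately, then sum.
def forecast_component(series):
    return series[-1]        # naive persistence stand-in for a per-IMF LSTM/SVR model

one_step = float(sum(forecast_component(imf) for imf in imfs))
print(f"{imfs.shape[0]} IMFs, combined one-step-ahead forecast: {one_step:.2f}")
```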
“…According to the characteristics of the wind speed data, an intelligent algorithm is used to determine the parameters adaptively during the training process to improve the learning ability and generalization ability of the model. The genetic algorithm [36], particle swarm optimization [27], and the cuckoo search algorithm [37] are used to optimize hybrid models by tuning the parameters and threshold values of BPNN, LSTM, SVM, and other intelligent learning models, which overcomes the low prediction accuracy of a single model and improves the accuracy of wind speed prediction. The prediction results of the traditional method are substituted into the error model, and the predicted error is superposed onto the forecast to correct it; this approach has strong universality and is not limited to a specific prediction process [38][39][40][41].…”
Section: Introduction (mentioning)
confidence: 99%
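The error-correction step in the last sentence, where a second model learns the base predictor's residuals and its output is superposed onto the base forecast, can be sketched as follows. The base model, residual learner, and data here are placeholders chosen only for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                       # placeholder lagged wind-speed features
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.2, 500)
X_train, X_test, y_train, y_test = X[:400], X[400:], y[:400], y[400:]

# Step 1: base ("traditional") forecaster.
base = LinearRegression().fit(X_train, y_train)

# Step 2: error model trained on the base model's residuals.
residuals = y_train - base.predict(X_train)
error_model = GradientBoostingRegressor().fit(X_train, residuals)

# Step 3: superpose the predicted error onto the base forecast.
corrected = base.predict(X_test) + error_model.predict(X_test)

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("base MSE:     ", mse(base.predict(X_test), y_test))
print("corrected MSE:", mse(corrected, y_test))
```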