2020
DOI: 10.1109/access.2020.3017655
Industrial Ultra-Short-Term Load Forecasting With Data Completion

Abstract: Accurate and efficient ultra-short-term load forecasting is crucial for industrial power users to maintain stable and optimized operations. In this paper, we develop novel strategies that help industrial power users address the challenges of ultra-short-term load forecasting. First, the paper proposes a two-way Genetic Algorithm Back Propagation Neural Network (GABPNN) missing-data completion model to handle data loss, which is common in power load data mining. A particle swarm optimization-supporting vector re…

Cited by 13 publications (7 citation statements)
References 24 publications
“…Its performance metrics are generally also better than those of MLNN, RNN, and VARMA; however, implementing all steps and finalizing results took a long time. VARMA yielded the lowest R2${R^2}$ and the highest MAPE, indicating that VARMA is less effective than other methods in week‐ahead peak load forecasting. Although both MLNN and RNN can learn from historical data, no feature extraction was performed, and so have poorer values of R2${R^2}$, MAPE and RMSE. The execution times (seconds) required by the proposed method, hybrid CNN (max tmp) and hybrid CNN (grid search) exceed one hour to optimally tune the topology and hyperparameters of the CNN by GA (outer loop, see Section 3.5) and train parameters and synaptic weights of the hybrid CNN by Adam optimizer (inner loop, see Section 3.5), as shown in Table 2. Because the most appropriate topology and/or hyperparameters of MLNN, RNN, SVR+PSO and VARMA herein were given from [10, 16, 25, 26], the execution times shown in Table 2 for these benchmark models only involved the training time. After the proposed hybrid CNN was well‐tuned by GA and trained by Adam optimizer, 146 sets of testing samples (20% of total datasets) were studied. The testing time was only 0.39 s for these testing samples (i.e.…”
Section: Simulation Results
confidence: 99%
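The citing authors describe a nested tuning scheme: a GA outer loop searches the CNN's topology and hyperparameters, while an inner loop trains the network's weights (by Adam in the cited work). A minimal sketch of that two-level structure follows; the toy regression task, the plain gradient-descent inner loop standing in for Adam, the tiny network, and the searched parameters (learning rate and hidden width) are all assumptions for illustration, not the cited paper's implementation.

```python
# Sketch of a GA outer loop (hyperparameter search) wrapped around a
# gradient-descent inner loop (weight training). Illustrative only:
# the data, model, and search space are assumptions.
import random
import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

# Toy data: y = 2x + noise (stand-in for a load-forecasting dataset).
X = rng.normal(size=(64, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=64)

def inner_train(lr, width, steps=200):
    """Inner loop: train a one-hidden-layer tanh net by gradient
    descent (stand-in for Adam); return the final MSE."""
    W1 = rng.normal(scale=0.5, size=(1, width))
    W2 = rng.normal(scale=0.5, size=(width,))
    for _ in range(steps):
        h = np.tanh(X @ W1)                 # (64, width)
        err = h @ W2 - y                    # (64,)
        gW2 = h.T @ err / len(y)
        gW1 = X.T @ ((err[:, None] * W2) * (1 - h**2)) / len(y)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

def ga_outer(pop_size=6, gens=5):
    """Outer loop: GA over (lr, width); fitness = inner-loop MSE."""
    pop = [(10 ** random.uniform(-3, -0.7), random.randint(2, 6))
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda p: inner_train(*p))
        parents = scored[: pop_size // 2]   # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)        # crossover
            lr = a[0] if random.random() < 0.5 else b[0]
            width = a[1] if random.random() < 0.5 else b[1]
            if random.random() < 0.3:               # mutation
                lr = float(np.clip(lr * 10 ** random.uniform(-0.3, 0.3),
                                   1e-4, 0.2))
            children.append((lr, width))
        pop = parents + children
    best = min(pop, key=lambda p: inner_train(*p))
    return best, inner_train(*best)

best_params, best_mse = ga_outer()
print(best_params, best_mse)
```

In the cited work the outer loop is the expensive part (over an hour), since every GA fitness evaluation requires a full inner training run; the sketch mirrors that cost structure in miniature.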
“…In [30], a hybrid method based on Elman neural network (ENN) and PSO is proposed. Reference [31] proposes a genetic-algorithm-based backpropagation neural network (GABPNN) considering data loss. Also, a particle swarm optimization-supporting vector regression (PSO-SVR) algorithm is further used to integrate the GABPNN results with better accuracy.…”
Section: Introduction
confidence: 99%
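The abstract's "two-way" completion idea can be illustrated with a much simpler stand-in: a forward model predicts a missing sample from the points before the gap, a backward model predicts it from the points after, and the two estimates are averaged. Here linear least-squares autoregression replaces the GA-tuned BP network of the cited paper, and the sinusoidal synthetic load curve is an assumption made only so the example is self-contained.

```python
# Two-way completion sketch: fill a lost sample using a forward
# predictor (trained on data before the gap) and a backward predictor
# (trained on reversed data after it), then average the estimates.
# Linear AR models stand in for the paper's GABPNN.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
# Synthetic daily-cycle load (24 samples per cycle) plus noise.
load = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, 200)

missing = 120  # index of the lost sample

def fit_ar(series, lags):
    """Least-squares AR model: predict series[i] from the values at
    the given lags before it (stand-in for the BP neural network)."""
    A = np.array([series[i - np.array(lags)]
                  for i in range(max(lags), len(series))])
    b = series[max(lags):]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

lags = [1, 2, 3]
fwd = fit_ar(load[:missing], lags)            # model of the past side
bwd = fit_ar(load[missing + 1:][::-1], lags)  # model of the future side

fwd_est = load[missing - np.array(lags)] @ fwd
bwd_est = load[missing + np.array(lags)] @ bwd
filled = 0.5 * (fwd_est + bwd_est)            # two-way average
print(round(filled, 2), round(load[missing], 2))
```

Averaging the two directions hedges against either side's model drifting near the gap boundary, which is the intuition behind completing from both directions rather than extrapolating from one.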