2012
DOI: 10.4304/jnw.7.11.1790-1795

Multi-variable Echo State Network Optimized by Bayesian Regulation for Daily Peak Load Forecasting

Abstract:

In this paper, a multi-variable echo state network trained with Bayesian regulation is developed for short-term load forecasting. The study focuses on the generalization ability of the new recurrent network; therefore, Bayesian regulation and the Levenberg-Marquardt algorithm are adopted to modify the output weights. The model is verified on data from a local power company in south China, and its performance is satisfactory. In addition, traditional methods are also applied to the same task…
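
The abstract does not spell out the implementation. As a rough, illustrative sketch only, the snippet below shows the generic ESN setup the abstract alludes to: a fixed random reservoir whose states are collected over the input series, with only the output (readout) weights fitted, here via a plain ridge penalty standing in for the Bayesian-regularized output training described in the paper. All sizes, data, and names are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- not taken from the paper.
n_inputs, n_reservoir, washout = 3, 200, 50

# Fixed random reservoir: in an ESN only the readout weights are trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_inputs)."""
    x = np.zeros(n_reservoir)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy multivariate series; the target is the next value of the first variable.
U = rng.standard_normal((1000, n_inputs))
y = np.roll(U[:, 0], -1)

S = run_reservoir(U)[washout:-1]
t = y[washout:-1]

# Ridge readout as a stand-in for the regularized output-weight training:
# only the output weights are penalized, the reservoir stays fixed.
lam = 1e-2
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_reservoir), S.T @ t)
print("train RMSE:", float(np.sqrt(np.mean((S @ W_out - t) ** 2))))
```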

Cited by 6 publications (6 citation statements)
References 16 publications
“…Ideally, the structure of the network will be simplified while the accuracy remains. Thus, the generalization ability can be improved [21]. We only penalize the output weights, since the input weights and cycle connection weights have already been pruned when the highly constrained reservoir is designed.…”
Section: B. Readout Regularization
confidence: 99%
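
For context on the statement above, the standard Bayesian-regularization objective (the kind presumably referenced as [21]) adds a weight-decay term to the squared output error, with the penalty applied only to the readout weights since the reservoir is fixed. The formulation below is the textbook one, shown only to make the cited idea concrete; α and β are the usual penalty and data-error weights.

```latex
% Regularized readout objective: only W^{out} is penalized, the reservoir is fixed.
F(W^{\mathrm{out}}) \;=\; \beta \, E_D \;+\; \alpha \, E_W,
\qquad
E_D = \sum_{t} \bigl\| y(t) - W^{\mathrm{out}} x(t) \bigr\|^{2},
\qquad
E_W = \bigl\| W^{\mathrm{out}} \bigr\|_{2}^{2}
```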
“…Similarly, the inputs and outputs of the Y-direction model correspond to the Y-direction data, while those of the X-direction model correspond to data in all three directions. In addition, to train the neural network more effectively and improve its generalization ability [19], the Bayesian regularization algorithm with a penalty function is chosen in this paper rather than the Levenberg-Marquardt algorithm, which converges quickly. The network has three layers: the activation function of the hidden layer is the sigmoid function, the activation function of the output layer is the linear function, and the number of training iterations is 200.…”
Section: Model Constructing and Data
confidence: 99%
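
As a hedged illustration of the network described above (three layers, sigmoid hidden units, a linear output, 200 training iterations), the sketch below uses scikit-learn's MLPRegressor with a fixed L2 penalty (alpha) as a simple stand-in for Bayesian regularization, which would additionally adapt the penalty weighting during training. The data and hidden-layer size are invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Toy regression data standing in for the direction-wise series.
X = rng.standard_normal((500, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + 0.05 * rng.standard_normal(500)

# Three-layer network: inputs -> sigmoid hidden layer -> linear output.
# alpha is a plain L2 weight penalty; true Bayesian regularization would tune
# the trade-off between data error and weight penalty automatically.
model = MLPRegressor(hidden_layer_sizes=(10,),
                     activation="logistic",   # sigmoid hidden units
                     alpha=1e-3,              # weight penalty (stand-in)
                     max_iter=200,            # 200 training iterations, as in the quote
                     random_state=0)
model.fit(X, y)
print("train R^2:", round(model.score(X, y), 3))
```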
“…In the training of the ESN only a specific hour of the day is taken into account. In [25] the ESN is used for a multi-variate TS prediction and the reservoir is trained with a Bayesian regularization technique. In order to avoid overfitting in the regression step, neurons and redundant connections are pruned.…”
Section: Related Work
confidence: 99%
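
The pruning step mentioned above (removing redundant connections in the regression stage to avoid overfitting) can be sketched as magnitude-based pruning of the readout weights followed by a refit. The magnitude criterion and the keep ratio are assumptions for illustration, not necessarily the rule used in [25].

```python
import numpy as np

def ridge_readout(S, t, lam=1e-2):
    """Closed-form regularized readout: only output weights are fitted."""
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ t)

def prune_and_refit(S, t, keep_ratio=0.5, lam=1e-2):
    """Drop the smallest-magnitude readout weights and refit on the survivors.
    Magnitude-based pruning is an assumption; the cited work may use another rule."""
    w = ridge_readout(S, t, lam)
    k = max(1, int(keep_ratio * len(w)))
    keep = np.argsort(np.abs(w))[-k:]          # indices of the largest weights
    w_pruned = np.zeros_like(w)
    w_pruned[keep] = ridge_readout(S[:, keep], t, lam)
    return w_pruned

# Toy usage with random "reservoir states" S and targets t.
rng = np.random.default_rng(2)
S = rng.standard_normal((300, 100))
t = S[:, :5] @ rng.standard_normal(5)          # only a few state units matter
w = prune_and_refit(S, t)
print("non-zero readout weights:", int(np.count_nonzero(w)))
```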
“…The smooth component, instead, is processed with an ESN, and the final prediction is obtained by integrating all the components. An interesting forecasting model using an ESN on a multivariate TS can be found in [25], where the prediction is performed with a special ESN that has a separate reservoir for each variable: this better captures the dynamics of each individual variable. Because of the multiple reservoirs, the number of connections from the internal matrices to the output is huge and could lead to overfitting during training; for this reason, the authors propose a method for pruning these connections.…”
Section: Related Work
confidence: 99%
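
To make the multi-reservoir structure described above concrete, the sketch below builds one small reservoir per input variable and concatenates the per-variable states before the readout, which is exactly why the number of output connections grows large and invites pruning. The reservoir sizes and update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vars, n_per_reservoir = 3, 80   # illustrative: one reservoir per variable

def make_reservoir(n):
    W = rng.uniform(-0.5, 0.5, (n, n))
    return 0.9 * W / max(abs(np.linalg.eigvals(W)))  # spectral radius below 1

W_in  = [rng.uniform(-0.5, 0.5, n_per_reservoir) for _ in range(n_vars)]
W_res = [make_reservoir(n_per_reservoir) for _ in range(n_vars)]

def run(U):
    """U has shape (T, n_vars); each variable drives its own reservoir,
    and the per-variable states are concatenated into one big state vector."""
    xs = [np.zeros(n_per_reservoir) for _ in range(n_vars)]
    states = []
    for u in U:
        xs = [np.tanh(W_in[i] * u[i] + W_res[i] @ xs[i]) for i in range(n_vars)]
        states.append(np.concatenate(xs))
    return np.array(states)

U = rng.standard_normal((400, n_vars))
S = run(U)
# n_vars * n_per_reservoir readout connections per output -- this is the large
# readout that the cited work prunes to avoid overfitting.
print("concatenated state size:", S.shape[1])
```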