Accurate electricity price forecasting not only enables the various electricity market stakeholders to make sound decisions and remain profitable in a competitive environment, but also helps improve power system stability. Nevertheless, because of its high volatility and uncertainty, accurately forecasting the electricity price remains a significant challenge. Since recurrent neural networks (RNNs) are well suited to processing time series data, in this paper we propose a bidirectional long short-term memory (LSTM)-based forecasting model, BRIM, which splits the state neurons of a regular RNN into two parts: the forward states (using historical electricity price information) process the data in the positive time direction, while the backward states (using the future price information already available at interconnected markets) process the data in the negative time direction. Moreover, because interconnected power exchange markets follow a common trend and thus provide signaling information for one another, it is sensible to incorporate and exploit the impact of neighboring markets on electricity price forecasting accuracy. Specifically, the future electricity prices of the interconnected market are used as input features for both the forward and the backward LSTM. Experiments on day-ahead electricity prices from the European Power Exchange (EPEX) show that the proposed BRIM improves predictive accuracy over various benchmarks; furthermore, the Diebold-Mariano (DM) test rejects the hypothesis that BRIM's forecast accuracy equals that of the other forecasting models, indicating that BRIM statistically significantly outperforms the competing schemes.
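A minimal sketch (not the authors' code, all layer sizes and window lengths are illustrative assumptions) of the BRIM-style architecture described above: a forward LSTM reads the historical price window of the target market, a backward LSTM reads the future price window of a neighboring interconnected market in reverse time order, and their final hidden states are fused by a linear head that outputs the day-ahead forecast.

import torch
import torch.nn as nn


class BRIMSketch(nn.Module):
    def __init__(self, hist_features: int = 1, neigh_features: int = 1,
                 hidden_size: int = 64, horizon: int = 24):
        super().__init__()
        # Forward LSTM over the historical prices of the target market.
        self.forward_lstm = nn.LSTM(hist_features, hidden_size, batch_first=True)
        # Backward LSTM over the (already published) future prices of the
        # interconnected neighboring market, processed in negative time direction.
        self.backward_lstm = nn.LSTM(neigh_features, hidden_size, batch_first=True)
        # Fuse both directions and map to the forecast horizon (e.g. 24 hours).
        self.head = nn.Linear(2 * hidden_size, horizon)

    def forward(self, hist: torch.Tensor, neigh_future: torch.Tensor) -> torch.Tensor:
        # hist:         (batch, past_steps, hist_features)
        # neigh_future: (batch, future_steps, neigh_features)
        _, (h_fwd, _) = self.forward_lstm(hist)
        # Reverse the future window so the backward LSTM runs backwards in time.
        _, (h_bwd, _) = self.backward_lstm(torch.flip(neigh_future, dims=[1]))
        fused = torch.cat([h_fwd[-1], h_bwd[-1]], dim=-1)
        return self.head(fused)


if __name__ == "__main__":
    model = BRIMSketch()
    hist = torch.randn(8, 168, 1)     # one week of hourly target-market prices
    neigh = torch.randn(8, 24, 1)     # next-day prices of the neighboring market
    print(model(hist, neigh).shape)   # -> torch.Size([8, 24])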
As a distributed learning framework, Federated Learning (FL) allows different local learners/participants to collaboratively train a joint model without exposing their own local data, offering a feasible way to legally bridge data islands. However, data privacy and model security remain two key challenges. The former means that, when the model is trained directly on the original data, various attacks can infer the original data samples from the trained FL model, thereby leaking the data. The latter means that unreliable or malicious participants may degrade or destroy the joint FL model by uploading incorrect local model parameters. Therefore, this paper proposes a novel distributed FL training framework, LDP-Fed+, which jointly considers differential privacy protection and model security defense. Specifically, a local perturbation module is first added on the local learner side, which perturbs each learner's original data through feature extraction, binary encoding and decoding, and randomized response. The local neural network model is then trained on the perturbed data, so the resulting network parameters satisfy local differential privacy and effectively counter model inversion attacks. Secondly, a security defense module is added on the server side, which uses an auxiliary model and the exponential mechanism of differential privacy to select an appropriate number of locally perturbed parameters for aggregation, strengthening the model's security defense and countering membership inference attacks. Experimental results show that, compared with other differential-privacy-based federated learning models, LDP-Fed+ achieves stronger robustness in model security and higher model training accuracy while still ensuring strict privacy protection.
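A minimal sketch (an assumption-laden illustration, not the paper's implementation) of the local perturbation idea described above: an extracted feature value is binary-encoded, each bit is flipped via randomized response with probability 1 / (1 + e^eps) so that each bit's report satisfies eps-local differential privacy (the overall budget composes across bits), and the perturbed bits are decoded back into a numeric feature used for local training. The value range, bit width, and epsilon below are hypothetical.

import numpy as np


def binary_encode(value: float, lo: float, hi: float, n_bits: int = 8) -> np.ndarray:
    """Quantize a feature value in [lo, hi] to an n_bits binary code."""
    level = int(round((value - lo) / (hi - lo) * (2 ** n_bits - 1)))
    return np.array([(level >> i) & 1 for i in range(n_bits)], dtype=np.uint8)


def binary_decode(bits: np.ndarray, lo: float, hi: float) -> float:
    """Map a (possibly perturbed) binary code back to a feature value."""
    level = int(sum(int(b) << i for i, b in enumerate(bits)))
    return lo + level / (2 ** len(bits) - 1) * (hi - lo)


def randomized_response(bits: np.ndarray, eps: float, rng: np.random.Generator) -> np.ndarray:
    """Keep each bit with probability e^eps / (1 + e^eps), otherwise flip it."""
    keep_prob = np.exp(eps) / (1.0 + np.exp(eps))
    keep = rng.random(bits.shape) < keep_prob
    return np.where(keep, bits, 1 - bits).astype(np.uint8)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw_feature = 0.37                           # an extracted feature in [0, 1]
    bits = binary_encode(raw_feature, 0.0, 1.0)
    noisy_bits = randomized_response(bits, eps=2.0, rng=rng)
    # The decoded, perturbed feature is what the local model would be trained on.
    print(binary_decode(noisy_bits, 0.0, 1.0))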