Microgrids are becoming an essential part of the power grid with respect to reliability, economy, and the environment, and renewable energies are their main sources of energy. Long-term solar generation forecasting is an important issue in microgrid planning and design from an engineering point of view, and solar generation forecasting depends mainly on solar radiation forecasting. Long-term solar radiation forecasting can also be used to estimate the degradation-rate-influenced energy potential of photovoltaic (PV) panels. In this paper, a comparative study of different deep learning approaches is carried out for forecasting hourly and daily solar radiation one year ahead. In the proposed method, state-of-the-art deep learning and machine learning architectures, namely gated recurrent units (GRU), long short-term memory (LSTM), recurrent neural networks (RNN), feedforward neural networks (FFNN), and support vector regression (SVR), are compared. The proposed method uses historical solar radiation data and clear-sky global horizontal irradiance (GHI). Although all the models performed well, the GRU performed relatively better than the other models. The proposed models are also compared with a traditional method for long-term solar radiation forecasting, random forest regression (RFR), and outperform it, confirming their effectiveness.
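The abstract gives no implementation details; as a rough illustration of the kind of GRU forecaster it describes, the sketch below (in Keras) maps a sliding window of past measured GHI paired with clear-sky GHI to the next-hour value. The 24-hour window, layer sizes, and feature choice are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    """Turn a (T, 2) array of [measured GHI, clear-sky GHI] into
    sliding windows X of shape (N, window, 2) and next-hour targets y."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, 0]          # target: the measured-GHI column
    return X, y

def build_gru_forecaster(window=24, n_features=2):
    """GRU regressor mapping a window of past values to the next-hour GHI."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),
        tf.keras.layers.GRU(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Usage (with hypothetical data): X, y = make_windows(ghi_and_clearsky)
# build_gru_forecaster().fit(X, y, epochs=50, validation_split=0.2)
```

The same windowing applies to the LSTM, RNN, and FFNN baselines; only the recurrent layer (or a flattened dense stack) changes.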
As communication technology plays an integral part in power systems, security issues become a major concern. This paper deals with security problems in the distribution automation system (DAS), which is inherently vulnerable to cyber attacks because of its high dependence on communication and its geographically dispersed terminal devices. We analyze the types of cyber threats in many applications of the distribution system and formulate security goals. We then propose an efficient security protocol that achieves these goals while avoiding the complex computation of encryption algorithms, in consideration of resource-constrained network nodes. We also propose a secure key distribution protocol. Finally, we demonstrate the feasibility of the proposed security protocol through experiments.
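The abstract does not specify the protocol itself, so the following is only a generic sketch of the kind of lightweight, symmetric message authentication it alludes to: a truncated HMAC tag plus a freshness counter, with no public-key or encryption operations. The frame layout, tag length, and key handling are assumptions, not the authors' design.

```python
import hmac
import hashlib
import struct

TAG_LEN = 16  # truncated HMAC-SHA256 tag (an assumption, not the paper's choice)

def protect(shared_key: bytes, device_id: bytes, counter: int, payload: bytes) -> bytes:
    """Frame a DAS control message as id || counter || payload || tag.
    The counter gives replay protection; the tag gives integrity and
    origin authentication using only a hash, no encryption."""
    header = device_id + struct.pack(">Q", counter)
    tag = hmac.new(shared_key, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    return header + payload + tag

def verify(shared_key: bytes, device_id: bytes, last_counter: int, message: bytes):
    """Check the tag and counter freshness; return (accepted, payload, counter)."""
    d = len(device_id)
    header, payload, tag = message[:d + 8], message[d + 8:-TAG_LEN], message[-TAG_LEN:]
    counter = struct.unpack(">Q", header[d:d + 8])[0]
    expected = hmac.new(shared_key, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    ok = (header[:d] == device_id
          and counter > last_counter
          and hmac.compare_digest(tag, expected))
    return ok, payload, counter
```

A separate key distribution step, as the paper proposes, would be needed to provision `shared_key` to each terminal device.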
The Lattice Boltzmann Method (LBM) is a powerful numerical method for simulating fluid flow. With its data-parallel nature, it is a promising candidate for parallel implementation on a GPU. The LBM, however, is heavily data-intensive and memory-bound. In particular, moving data to adjacent cells in the streaming phase incurs many uncoalesced accesses on the GPU, which degrades overall performance. Furthermore, the main computation kernels of the LBM use a large number of registers per thread, which limits the thread parallelism available at run time because the number of registers on the GPU is fixed. In this paper, we develop a high-performance parallelization of the LBM on a GPU by minimizing the overhead of uncoalesced memory accesses while improving cache locality through a tiling optimization combined with a data layout change. We also aggressively reduce register usage in the LBM kernels to increase run-time thread parallelism. Experimental results on the Nvidia Tesla K20 GPU show that our approach delivers impressive throughput: 1210.63 Million Lattice Updates Per Second (MLUPS).
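As a plain reference for the streaming phase discussed above, the NumPy sketch below implements a periodic D2Q9 streaming step on a structure-of-arrays layout (one contiguous array per lattice direction). It is a CPU-side illustration of the data movement and of the layout change that favors coalescing, not the authors' CUDA kernels; grid size and lattice ordering are assumptions.

```python
import numpy as np

# D2Q9 lattice velocities: (cx, cy) for each of the 9 directions.
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def stream(f):
    """Periodic streaming step on a structure-of-arrays field f[q, y, x]:
    every distribution f_q is shifted one cell along its lattice velocity.
    Storing each direction contiguously (SoA) is the kind of layout that
    keeps these shifted accesses coalesced when mapped onto GPU threads."""
    for q, (cx, cy) in enumerate(C):
        f[q] = np.roll(np.roll(f[q], shift=cy, axis=0), shift=cx, axis=1)
    return f

# Usage on a small grid: f = np.random.rand(9, 64, 64); f = stream(f)
```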