Wind energy prediction represents an important and active field in the renewable energy sector. As renewable energy sources are integrated into existing grids and combined with traditional sources, knowing how much energy will be produced is key to minimizing the operational cost of a wind farm and to the safe operation of the power grid. In this context, we propose a comparative and comprehensive study of artificial neural networks, support vector regression, random trees, and random forests, and present the pros and cons of implementing each of these techniques. A step-by-step approach based on the CRISP-DM data mining framework reveals the thought process end-to-end, including feature engineering, metrics selection, model selection, and hyperparameter tuning. Using the selected evaluation metrics, we provide a summary highlighting the best results and the trade-off between performance and the resources expended to achieve them. This research is also intended to provide guidance for wind energy professionals, filling the gap between purely academic research and real-world business use cases by providing the exact architectures and selected hyperparameters.
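As an illustration of the kind of comparative setup this abstract describes, the sketch below tunes and evaluates the four model families with scikit-learn under cross-validation. The feature set, the hyperparameter grids, and the use of a plain decision tree as a stand-in for the random tree model are assumptions made for illustration, not the architectures or hyperparameters selected in the paper.

```python
# Minimal sketch of a CRISP-DM-style model comparison for wind power prediction.
# Features, grids, and model settings are illustrative assumptions only.
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor          # stand-in for a random tree
from sklearn.ensemble import RandomForestRegressor

candidates = {
    "ann": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(32,), (64, 32)]}),
    "svr": (make_pipeline(StandardScaler(), SVR()),
            {"svr__C": [1, 10], "svr__gamma": ["scale", 0.1]}),
    "tree": (DecisionTreeRegressor(random_state=0),
             {"max_depth": [5, 10, None]}),
    "forest": (RandomForestRegressor(random_state=0),
               {"n_estimators": [100, 300], "max_depth": [10, None]}),
}

def compare_models(X, y):
    """Tune each candidate by cross-validated MAE and report the best settings."""
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    results = {}
    for name, (estimator, grid) in candidates.items():
        search = GridSearchCV(estimator, grid, cv=cv,
                              scoring="neg_mean_absolute_error", refit=True)
        search.fit(X, y)
        results[name] = {
            "best_params": search.best_params_,
            "cv_mae": -search.best_score_,        # MAE averaged over the folds
            "train_r2": search.best_estimator_.score(X, y),
        }
    return results
```

In practice, X would hold SCADA-style features such as wind speed and direction, and y the produced power; the resulting table of MAE, R2, and fit time per model is the kind of summary the abstract refers to when weighing performance against resources.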
In recent years, digital transformation has become one of the most widely used approaches to optimizing building energy consumption. Increased interest in improving energy sustainability and comfort inside buildings has created an opportunity for digital transformation to deliver predictive tools for energy consumption. Through retrofitting and new construction technologies, the quantity and quality of the operational data collected have reached unprecedented levels. This data must be exploited with powerful predictive tools that provide the required level of certainty. Adopting Six Sigma's Define, Measure, Analyze, Improve, Control (DMAIC) cycle as the predictive analytics framework makes this paper accessible both to professionals working in the energy industry and to researchers developing models, creating the premises for narrowing the gap between research and real-world business and guiding the use of data. Moreover, the selected strategy for preprocessing and hyperparameter selection is presented, with the final selected models showing scalability and flexibility. Finally, the architectures, performance, and training times are discussed and coupled with the thought process, providing a way to weigh up the options. Building energy consumption prediction is a relevant and timely topic. Firstly, at the European level, meeting the targets set by the new European Green Deal for the buildings sector relies heavily on digitization and therefore on predictive analytics. Secondly, in Romania, the liberalization of the energy market created an unprecedented increase in energy prices. The negative social impact could be diminished not only by reducing prices, but also by understanding how the energy is consumed.
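To make the preprocessing and hyperparameter-selection step concrete, the sketch below shows one way such a pipeline could be assembled in scikit-learn, roughly corresponding to the Improve phase of DMAIC. The column names, the imputation and encoding choices, and the random forest regressor are hypothetical placeholders; the paper's final model architectures are not reproduced here.

```python
# Minimal sketch of a preprocessing + hyperparameter-selection pipeline for
# building energy consumption data. Column names and model choice are
# illustrative assumptions only.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric = ["outdoor_temp", "occupancy", "hour_of_day"]   # hypothetical sensor columns
categorical = ["day_type"]                               # hypothetical calendar column

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess),
                  ("reg", RandomForestRegressor(random_state=0))])

search = RandomizedSearchCV(
    model,
    param_distributions={"reg__n_estimators": [100, 200, 400],
                         "reg__max_depth": [8, 16, None]},
    n_iter=6,
    cv=TimeSeriesSplit(n_splits=5),   # respects the temporal order of meter data
    scoring="neg_root_mean_squared_error",
    random_state=0,
)
# search.fit(df[numeric + categorical], df["energy_kwh"])
# where df is a pandas DataFrame with the hypothetical columns above.
```

Bundling imputation, scaling, and encoding into a single pipeline keeps the Measure-to-Improve handover reproducible: the same fitted pipeline can be reused in the Control phase without re-deriving the preprocessing by hand.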
Air quality forecasting is very difficult to achieve in metropolitan areas due to pollutant emission dynamics, high population density, and uncertainty in defining meteorological conditions. The use of data that contain insufficient information for model training, together with poor model selection, limits the accuracy of air quality prediction. In this study, the NO2 concentration for the year 2022 is predicted using a long short-term memory network (LSTM) and a gated recurrent unit (GRU), which improves performance compared to traditional methods. The data used for predictive modeling are obtained from the National Air Quality Monitoring Network. The key performance indicators (KPIs) are computed on the testing data subset by comparing the predicted NO2 values to the known real values. Further, two additional predictions were performed for two days outside the modeling dataset. The quality of the data is not as expected, so the missing data had to be imputed before building the models. LSTM and GRU performance in predicting NO2 levels is similar and reasonable with respect to the case study. In terms of pure generalization capability, both LSTM and GRU have a maximum R2 value below 0.8. LSTM and GRU are powerful architectures for time-series prediction. Both are highly configurable, so the probability of identifying the best-suited solution for the studied problem is consequently high.
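The sketch below shows a minimal one-step-ahead NO2 forecaster in Keras along the lines of the LSTM/GRU models discussed above. The window length, layer sizes, training settings, and the synthetic placeholder series are assumptions for illustration only; the actual study uses monitoring-network data after imputation, with configurations not reproduced here.

```python
# Minimal sketch of a one-step-ahead NO2 forecaster with an LSTM or GRU layer.
# Window length, layer sizes, and training settings are illustrative assumptions.
import numpy as np
from tensorflow import keras

def make_windows(series, window=24):
    """Turn an hourly series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y

def build_model(cell="lstm", window=24, units=64):
    """Build a single-layer LSTM or GRU regressor for one-step-ahead prediction."""
    rnn = keras.layers.LSTM if cell == "lstm" else keras.layers.GRU
    model = keras.Sequential([
        keras.layers.Input(shape=(window, 1)),
        rnn(units),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Synthetic placeholder series standing in for imputed hourly NO2 measurements.
series = np.sin(np.linspace(0, 50, 2000)) + 0.1 * np.random.randn(2000)
X, y = make_windows(series)
model = build_model("gru")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```

Swapping `cell="lstm"` for `cell="gru"` changes only the recurrent layer, which is what makes the side-by-side comparison of the two architectures straightforward in this setup.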