Abstract: The massive penetration of wind generators in existing electrical grids is causing several critical issues, which are pushing system operators to enhance their operation functions in order to mitigate the effects produced by intermittent and non-programmable generation profiles. In this context, the integration of wind forecasting and reliability models based on experimental data represents a strategic tool for assessing the impact of generators and the grid operating state on the available power profiles. Unfortunately, field data acquired by Supervisory Control and Data Acquisition (SCADA) systems can be characterized by outliers and incoherent data, which need to be properly detected and filtered in order to avoid large modeling errors. To deal with this challenging issue, in this paper a novel methodology fusing fuzzy clustering techniques and probability-based anomaly detection algorithms is proposed for wind data filtering and data-driven generator modeling.
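The kind of SCADA filtering described above can be illustrated with a minimal sketch. This is not the paper's actual fuzzy-clustering algorithm; it is a simplified, hypothetical bin-wise probabilistic filter that flags power readings deviating strongly from the typical power at similar wind speeds:

```python
import numpy as np

def filter_scada(wind_speed, power, n_bins=10, k=3.0):
    """Flag anomalous SCADA samples (illustrative stand-in for the
    paper's fuzzy-clustering / probabilistic filter).

    Partition samples into wind-speed bins and, within each bin, flag
    power readings more than k standard deviations from the bin mean.
    Returns a boolean mask where True marks a suspected outlier."""
    wind_speed = np.asarray(wind_speed, float)
    power = np.asarray(power, float)
    edges = np.linspace(wind_speed.min(), wind_speed.max(), n_bins)
    bins = np.digitize(wind_speed, edges)
    mask = np.zeros(power.size, dtype=bool)
    for b in np.unique(bins):
        idx = bins == b
        mu, sd = power[idx].mean(), power[idx].std()
        if sd > 0:  # skip degenerate bins (e.g., a single sample)
            mask[idx] = np.abs(power[idx] - mu) > k * sd
    return mask
```

On a synthetic cubic power curve with one injected zero-power reading at high wind speed, only the injected sample is flagged; real SCADA data would of course need the richer fuzzy-membership treatment the paper proposes.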
This paper summarizes the report prepared by an IEEE PES Task Force. Resilience is a fairly new technical concept for power systems, and it is important to precisely delineate this concept for actual applications. As a critical infrastructure, power systems have to be prepared to survive rare but extreme incidents (natural catastrophes, extreme weather events, physical/cyber-attacks, equipment failure cascades, etc.) to guarantee power supply to the electricity-dependent economy and society. Thus, resilience needs to be integrated into planning and operational assessment to design and operate adequately resilient power systems. Quantification of resilience as a key performance indicator is important, together with costs and reliability. Quantification makes it possible to analyze existing power systems and to identify resilience improvements in future power systems. Given that a 100% resilient system is not economic (or even technically achievable), the degree of resilience should be transparent and comprehensible. Several gaps are identified to indicate further needs for research and development.
The availability of massive amounts of temporal data opens new perspectives for knowledge extraction and automated decision making for companies and practitioners. However, learning forecasting models from data requires a solid data science or machine learning (ML) background and expertise, which is not always available to end-users. This gap fosters a growing demand for frameworks automating the ML pipeline and ensuring broader access for the general public. Automatic machine learning (AutoML) provides solutions to build and validate machine learning pipelines with minimal user intervention. Most of those pipelines have been validated in static supervised learning settings, while an extensive validation in time series prediction is still missing. This issue is particularly important in the forecasting community, where the relevance of machine learning approaches is still under debate. This paper assesses four existing AutoML frameworks (AutoGluon, H2O, TPOT, Auto-sklearn) on a number of forecasting challenges (univariate and multivariate, single-step and multi-step ahead) by benchmarking them against simple and conventional forecasting strategies (e.g., naive and exponential smoothing). The obtained results highlight that AutoML approaches are not yet mature enough to address generic forecasting tasks when compared with faster yet more basic statistical forecasters. In particular, the tested AutoML configurations, on average, do not significantly outperform a naive estimator. These results, though preliminary, should not be interpreted as a rejection of AutoML solutions in forecasting but as an encouragement to a more rigorous validation of their limits and perspectives.
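The two baseline strategies named above are simple enough to sketch directly. This is a minimal, generic rendering of the naive and simple-exponential-smoothing forecasters (not the paper's benchmarking code), useful as a reference point when judging any learned model:

```python
import numpy as np

def naive_forecast(y, h=1):
    """Naive baseline: repeat the last observed value h steps ahead."""
    y = np.asarray(y, float)
    return np.full(h, y[-1])

def ses_forecast(y, alpha=0.3, h=1):
    """Simple exponential smoothing (level-only model).

    The smoothed level is a geometrically weighted average of past
    observations; with alpha=1 it reduces to the naive forecast."""
    y = np.asarray(y, float)
    level = y[0]
    for v in y[1:]:
        level = alpha * v + (1 - alpha) * level
    return np.full(h, level)

def mae(y_true, y_pred):
    """Mean absolute error between forecasts and outcomes."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```

In a benchmarking setup like the one described, an AutoML pipeline would be considered useful only if it beats these forecasters on held-out data under a rolling-origin evaluation.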
Abstract: The massive penetration of wind generators in electrical power systems calls for effective wind power forecasting tools, which should be highly reliable, in order to mitigate the effects of uncertain generation profiles, and fast enough to enhance power system operation. To address these two conflicting objectives, this paper advocates the role of knowledge discovery from big data, by proposing the integration of adaptive Case-Based Reasoning models with cardinality reduction techniques based on Partial Least Squares Regression and Principal Component Analysis. The main idea is to learn from a large database of historical climatic observations how to solve the wind-forecasting problem, avoiding complex and time-consuming computations. To assess the benefits derived from the application of the proposed methodology in complex application scenarios, experimental results obtained in a real case study are presented and discussed.
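The combination of cardinality reduction and case retrieval can be sketched in a few lines. The following is a hypothetical simplification, not the paper's method: PCA via SVD compresses the historical climatic observations, and a nearest-neighbor lookup in the reduced space plays the role of the Case-Based Reasoning retrieval step:

```python
import numpy as np

def pca_reduce(X, k):
    """Center the case base X (rows = historical cases) and project it
    onto the top-k principal components obtained via SVD."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    return Xc @ components.T, components, mean

def retrieve_case(query, cases_reduced, components, mean):
    """Project a new observation into the reduced space and return the
    index of the most similar historical case (Euclidean distance)."""
    q = (query - mean) @ components.T
    distances = np.linalg.norm(cases_reduced - q, axis=1)
    return int(np.argmin(distances))
```

Retrieval in the k-dimensional space is much cheaper than comparing full climatic records, which is the efficiency argument behind combining cardinality reduction with case-based retrieval; the paper's adaptive CBR and PLS machinery is of course richer than this sketch.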