A forecast interval is an effective way to handle forecast uncertainty in solar photovoltaic systems. When estimating the forecast interval, most existing approaches apply an identical policy to every point forecast, which yields inefficient intervals (e.g. an unnecessarily wide interval for an accurate forecast). They also rely on complex models and may even require modifying the existing deterministic forecasting model, which hinders their practical application. To overcome these limitations, the authors introduce a forecast uncertainty-aware forecast interval. They compute a forecast accuracy-related uncertainty metric from an ensemble method based on the dropout technique, which is widely used in deep learning models; as a result, the proposed approach can be applied to existing deep learning forecasting models without modification. Using this uncertainty metric and relevant data from previous forecast results, they estimate the uncertainty-aware forecast interval. In experiments on real-world data, they first demonstrate the close relation between their uncertainty metric and forecasting accuracy. They then show that, compared with existing approaches, the uncertainty-aware forecast interval reduces the mean interval length by up to 25.7% while decreasing the prediction interval coverage probability by 4.07%, illustrating that their approach yields an effective interval.
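The following is a minimal sketch of how a dropout-based ensemble ("MC dropout") can yield a point forecast together with a forecast accuracy-related uncertainty metric, in the spirit of the approach above. It assumes a PyTorch model containing nn.Dropout layers; the function name, the number of samples, and the use of the sample standard deviation as the uncertainty metric are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def mc_dropout_forecast(model, x, n_samples=50):
    """Run repeated stochastic forward passes with dropout active at inference."""
    model.eval()
    # Re-enable only the dropout layers so each pass samples a different sub-network,
    # without modifying the underlying deterministic forecasting model.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    point_forecast = samples.mean(dim=0)  # ensemble mean as the point forecast
    uncertainty = samples.std(dim=0)      # spread across passes as an uncertainty metric
    return point_forecast, uncertainty
```

Because the sketch only toggles existing dropout layers at inference time, it can wrap an already-trained deep learning forecasting model without retraining or architectural changes, which mirrors the non-intrusiveness claimed above.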
To improve deep learning forecasting for photovoltaic systems, confidence information about each point forecast is necessary in practical settings where uncertainty is unavoidable. In this study, the authors use Bayesian deep learning to introduce a confidence-aware deep learning forecasting system that provides confidence information alongside the point forecast. Through experiments on real-world data, they first address three main issues that arise when Bayesian deep learning is applied to forecasting daily solar irradiance from weather forecasts: selecting the neural network model, selecting the validation data used to estimate the confidence information, and choosing the method for estimating the confidence information. They then examine the feasibility of the confidence-aware forecasting system for estimating confidence information. Classifying the forecast outputs into confident and non-confident outputs using the confidence information, they show that the maximum absolute percentage errors of the confident and non-confident outputs are 5% and 22.8%, respectively, at a specific classification threshold. This result shows that their confidence-aware deep learning forecasting system estimates meaningful confidence information that is closely related to forecast accuracy.
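Below is a minimal sketch of the confident/non-confident split described above: forecasts are partitioned by thresholding a confidence (uncertainty) estimate, and the maximum absolute percentage error is compared between the two groups. The arrays, helper name, and threshold value are illustrative assumptions; the paper's actual confidence estimator and chosen threshold are not reproduced here.

```python
import numpy as np

def split_by_confidence(y_true, y_pred, uncertainty, threshold):
    """Compare max absolute percentage error (APE) of confident vs. non-confident forecasts."""
    confident = uncertainty <= threshold                 # low uncertainty -> "confident" output
    ape = np.abs((y_true - y_pred) / y_true) * 100.0     # APE in %, assumes y_true has no zeros
    return ape[confident].max(), ape[~confident].max()

# Illustrative usage with synthetic values (not the paper's data):
y_true = np.array([5.0, 6.2, 4.8, 7.1])
y_pred = np.array([5.1, 6.0, 3.9, 7.0])
uncertainty = np.array([0.1, 0.2, 0.9, 0.15])
print(split_by_confidence(y_true, y_pred, uncertainty, threshold=0.5))
```

A split of this kind is how one would verify empirically that the confidence information tracks forecast accuracy, i.e. that confident outputs exhibit much smaller worst-case errors than non-confident ones.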