This work presents an alternative metric for evaluating the quality of solar forecasting models. Conventional approaches often rely on statistical quantities such as the root-mean-square error (RMSE) and correlation coefficients to assess model quality. Using such statistics directly to assign forecasting quality can be misleading because they convey no measure of the variability of the solar irradiance time series. In contrast, the quality metric proposed here, defined as the ratio of solar uncertainty to solar variability, compares the forecasting error directly with the solar variability. By making this comparison over different time windows, we show that the ratio is essentially a statistical invariant for each forecast model: it is preserved across widely different time horizons when the same time-averaging periods are used, and therefore provides a robust way to compare solar forecasting skill. We employ the proposed metric to evaluate two new forecasting models proposed here and compare their performance with that of a persistence model.
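The abstract defines the metric only as a ratio of solar uncertainty to solar variability. The sketch below illustrates one plausible reading of that ratio; the specific formulas (RMS of step-to-step changes for variability, RMSE against measurements for uncertainty) are assumptions chosen for illustration, not necessarily the authors' exact definitions.

```python
import numpy as np

def solar_variability(measured):
    """Variability: RMS of step-to-step changes in the measured
    irradiance series (assumed definition, for illustration only)."""
    steps = np.diff(np.asarray(measured, dtype=float))
    return float(np.sqrt(np.mean(steps ** 2)))

def solar_uncertainty(forecast, measured):
    """Uncertainty: RMSE of the forecast against the measured
    series (assumed definition, for illustration only)."""
    err = np.asarray(forecast, dtype=float) - np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

def quality_ratio(forecast, measured):
    """Proposed metric: solar uncertainty divided by solar
    variability; lower values indicate better forecasts."""
    return solar_uncertainty(forecast, measured) / solar_variability(measured)
```

Under these assumed definitions, a perfect forecast yields a ratio of zero, and the ratio normalizes the forecast error by how much the irradiance itself fluctuates, which is what allows comparisons across different time horizons.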