Accurate savings estimates are important to promote energy efficiency projects and demonstrate their cost-effectiveness. The increasing presence of advanced metering infrastructure (AMI) in commercial buildings has resulted in a rising availability of high-frequency interval data. These data can be used for a variety of energy efficiency applications such as demand response, fault detection and diagnosis, and heating, ventilation, and air conditioning (HVAC) optimization. This large amount of data has also opened the door to the use of advanced statistical learning models, which hold promise for providing accurate building baseline energy consumption predictions, and thus accurate savings estimates. The gradient boosting machine is a powerful machine learning algorithm that is gaining considerable traction in a wide range of data-driven applications, such as ecology, computer vision, and biology. In the present work, an energy consumption baseline modeling method based on a gradient boosting machine is proposed. To assess the performance of this method, a recently published testing procedure was used on a large dataset of 410 commercial buildings. The model training periods were varied and several prediction accuracy metrics were used to evaluate the model's performance. The results show that using the gradient boosting machine model improved the R-squared prediction accuracy and the CV(RMSE) in more than 80 percent of the cases, when compared to an industry best practice model based on piecewise linear regression, and to a random forest algorithm.
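The workflow described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's GradientBoostingRegressor as a stand-in for the paper's model, and the synthetic interval data and feature choices (hour, weekday, outdoor temperature) are hypothetical.

```python
# Hypothetical sketch: fit a gradient boosting baseline model on synthetic
# interval data, then score its predictions with CV(RMSE).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic hourly data: hour of day, day of week, outdoor air temperature.
n = 2000
hour = rng.integers(0, 24, n)
weekday = rng.integers(0, 7, n)
temp = rng.uniform(0, 35, n)
# Load = base + weekday-occupied-hours bump + cooling term + noise (made up).
load = (50
        + 10 * (weekday < 5) * (8 <= hour) * (hour < 18)
        + 1.5 * np.maximum(temp - 18, 0)
        + rng.normal(0, 2, n))

X = np.column_stack([hour, weekday, temp])
split = int(0.75 * n)  # train on the first portion, predict the remainder

model = GradientBoostingRegressor(n_estimators=200, max_depth=4,
                                  learning_rate=0.05, random_state=0)
model.fit(X[:split], load[:split])
pred = model.predict(X[split:])

# CV(RMSE): RMSE normalized by the mean of the measured values, in percent.
cvrmse = 100 * np.sqrt(np.mean((load[split:] - pred) ** 2)) / np.mean(load[split:])
print(f"CV(RMSE): {cvrmse:.1f}%")
```

In practice the baseline model would be trained on pre-retrofit meter data and the projected baseline compared against post-retrofit consumption to estimate savings.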
Trustworthy savings calculations are critical to convincing investors in energy efficiency projects of the benefit and cost-effectiveness of such investments and their ability to replace or defer supply-side capital investments. However, today's methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of efficiency projects. They also require time-consuming manual data acquisition and often do not deliver results until years after the program period has ended. The rising availability of "smart" meters, combined with new analytical approaches to quantifying savings, has opened the door to conducting M&V more quickly and at lower cost, with comparable or improved accuracy. These meter- and software-based approaches, increasingly referred to as "M&V 2.0", are the subject of surging industry interest, particularly in the context of utility energy efficiency programs. Program administrators, evaluators, and regulators are asking how M&V 2.0 compares with more traditional methods, how proprietary software can be transparently performance tested, and how these techniques can be integrated into the next generation of whole-building-focused efficiency programs. This paper expands recent analyses of public-domain whole-building M&V methods, focusing on more novel M&V 2.0 modeling approaches that are used in commercial technologies, as well as approaches that are documented in the literature and/or developed by the academic building research community. We present a testing procedure and metrics to assess the performance of whole-building M&V methods. We then illustrate the test procedure by evaluating the accuracy of ten baseline energy use models against measured data from a large dataset of 537 buildings.
The results of this study show that the already available advanced interval data baseline models hold great promise for scaling the adoption of building measured savings calculations using Advanced Metering Infrastructure (AMI) data. The median coefficient of variation of the root mean squared error (CV(RMSE)) was less than 25% for every model tested when twelve months of training data were used. Even with six months of training data, median CV(RMSE) for daily energy totals was under 25% for all models tested. These findings can be used to build confidence in model robustness and the readiness of these approaches for industry uptake and adoption.
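The CV(RMSE) threshold cited above is the standard goodness-of-fit metric for whole-building baseline models. A minimal reference implementation (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def cv_rmse(actual, predicted):
    """Coefficient of variation of the RMSE, in percent:
    100 * sqrt(mean((y - yhat)^2)) / mean(y)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return 100.0 * rmse / np.mean(actual)

# Made-up daily totals (kWh): a uniform 1 kWh error on a 10 kWh mean is 10%.
print(cv_rmse([10.0, 10.0], [11.0, 9.0]))  # → 10.0
```

Lower values indicate a better fit; a 25% median CV(RMSE) on daily totals means the typical model's prediction error is a quarter of the building's average daily consumption.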
Trustworthy savings calculations are critical to convincing regulators of both the cost-effectiveness of energy efficiency program investments and their ability to defer supply-side capital investments. Today's methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of energy efficiency programs. They also require time-consuming data acquisition. A spectrum of savings calculation approaches is used, with some relying more heavily on measured data and others relying more heavily on estimated, modeled, or stipulated data. The increasing availability of "smart" meters and devices that report near-real-time data, combined with new analytical approaches to quantifying savings, offers the potential to conduct M&V more quickly and at lower cost, with comparable or improved accuracy. Commercial energy management and information systems (EMIS) technologies are beginning to offer these 'M&V 2.0' capabilities, and program administrators want to understand how they might assist programs in quickly and accurately measuring energy savings. This paper presents the results of recent testing of the ability to use automation to streamline the M&V process. We apply an automated whole-building M&V tool to historic data sets from energy efficiency programs to begin to explore the accuracy, cost, and time trade-offs between more traditional M&V and these emerging streamlined methods that use high-resolution energy data and automated computational intelligence. For the data sets studied, we evaluate the fraction of buildings that are well suited to automated baseline characterization, the uncertainty in gross savings that is due to M&V 2.0 tools' model error, indications of labor time savings, and how the automated savings results compare to prior, traditionally determined savings results. The results show that 70% of the buildings were well suited to the automated approach.
In a majority of the cases (80%), savings and uncertainties for each individual building were quantified to levels exceeding the criteria in ASHRAE Guideline 14. In addition, the findings suggest that M&V 2.0 methods may also offer time savings relative to traditional approaches. Finally, we discuss the implications of these findings for the potential evolution of M&V, and for pilots currently being launched to test how M&V automation can be integrated into ratepayer-funded programs and professional implementation and evaluation practice.
The surge in interval meter data availability and associated activity in energy data analytics has inspired new interest in advanced methods for building efficiency savings estimation. Statistical and machine learning approaches are being explored to improve the energy baseline models used to measure and verify savings. One outstanding challenge is the ability to identify and account for operational changes that may confound savings estimates. In the measurement and verification (M&V) context, 'non-routine events' (NREs) cause changes in building energy use that are not attributable to installed efficiency measures and not accounted for in the baseline model's independent variables. In the M&V process, NREs must be accounted for as 'adjustments' to appropriately attribute the estimated energy savings to the specific efficiency interventions that were implemented. Currently, this is a manual, custom process conducted using professional judgment and engineering expertise. As such, it remains a barrier to scaling and standardizing meter-based savings estimation. In this work, a data-driven methodology was developed to (partially) automate, and therefore streamline, the process of detecting NREs in the post-retrofit period and making associated savings adjustments. The proposed NRE detection algorithm is based on a statistical change point detection method and a dissimilarity metric. The dissimilarity metric measures the proximity between the actual time series of the post-retrofit energy consumption and the projected baseline, which is generated using a statistical baseline model. The suggested approach for NRE adjustment involves the NRE detection algorithm, the M&V practitioner, and a regression modeling algorithm. The performance of the detection and adjustment algorithm was evaluated using a simulation-generated test data set and two benchmark algorithms.
Results show a high true positive detection rate (75%-100% across the test cases), higher-than-ideal false positive detection rates (20%-70%), and low errors in energy adjustment (<0.7%). These results indicate that the algorithm holds promise for helping M&V practitioners streamline the process of handling NREs. Moreover, the change point algorithm and underlying statistical principles could prove valuable for other building analytics applications such as anomaly detection and fault diagnostics.
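The core detection idea can be illustrated with a simple sketch. The paper's actual algorithm and dissimilarity metric are not reproduced here; this example assumes a basic CUSUM-style change point statistic applied to the residuals between measured post-retrofit consumption and the projected baseline, with made-up synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily residuals: measured post-retrofit use minus projected
# baseline. A hypothetical non-routine event at day 60 shifts load upward.
residual = rng.normal(0.0, 1.0, 120)
residual[60:] += 3.0

def change_point(x, margin=5):
    """Find the split that maximizes the size-weighted difference in segment
    means (a simple two-segment change point statistic)."""
    n = len(x)
    best_k, best_stat = None, 0.0
    for k in range(margin, n - margin):  # skip tiny edge segments
        left, right = x[:k], x[k:]
        stat = abs(left.mean() - right.mean()) * np.sqrt(k * (n - k) / n)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

k, stat = change_point(residual)
print(f"candidate NRE at day {k} (statistic {stat:.1f})")
```

In the paper's workflow, a flagged change point like this would be reviewed by the M&V practitioner before any savings adjustment is applied, rather than accepted automatically.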
Abstract - Screening Method Using the Derivative-based Global Sensitivity Indices with Application to a Reservoir Simulator - Flow simulators for porous media are used to forecast the production of petroleum reservoirs, and can involve a large number of uncertain input parameters. Sensitivity analysis can help reservoir engineers focus on the inputs whose uncertainties have an impact on the model output, which allows reducing the complexity of the model. There are several ways to define sensitivity indices. A possible quantitative definition is the variance-based sensitivity indices, which quantify the amount of output uncertainty due to the uncertainty of the inputs. However, the classical methods to estimate such sensitivity indices in a high-dimensional problem can require a huge number of reservoir model evaluations.
Recently, new sensitivity indices based on averaging local derivatives of the model output over the input domain have been introduced. These so-called Derivative-based Global Sensitivity Measures (DGSM) have been proposed to overcome the problem of dimensionality and are linked to total effect indices, which are variance-based sensitivity indices. In this work, we propose a screening method based on revised DGSM indices, which increases the interpretability in some complex cases and has a lower computational cost, as demonstrated by numerical test cases and by an application to a synthetic reservoir test model.
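The basic DGSM construction can be sketched numerically. This is a generic illustration of the underlying idea (mean squared partial derivatives estimated by finite differences over Monte Carlo samples), not the revised indices proposed in the paper; the toy model and sample sizes are assumptions.

```python
import numpy as np

def dgsm(f, d, n_samples=1000, h=1e-5, seed=0):
    """Crude DGSM estimate: for each input i, the mean over the input domain
    of the squared partial derivative df/dx_i, approximated by central finite
    differences on uniform [0, 1]^d Monte Carlo samples."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, (n_samples, d))
    nu = np.zeros(d)
    for i in range(d):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, i] += h
        Xm[:, i] -= h
        grad = (f(Xp) - f(Xm)) / (2 * h)
        nu[i] = np.mean(grad ** 2)
    return nu

# Toy linear model: y = 4*x1 + x2 (x3 inactive), so DGSM ranks x1 >> x2 >> x3.
f = lambda X: 4 * X[:, 0] + X[:, 1]
nu = dgsm(f, 3)
print(nu)  # ≈ [16, 1, 0]
```

For screening, inputs with DGSM values near zero (like x3 here) can be fixed at nominal values, which is the dimensionality reduction the abstract describes; in a real reservoir application the derivatives would come from far fewer, costlier simulator runs.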