2020
DOI: 10.3390/e22010071
Energy Disaggregation Using Elastic Matching Algorithms

Abstract: In this article, an energy disaggregation architecture using elastic matching algorithms is presented. The architecture uses a database of reference energy consumption signatures and compares them with incoming energy consumption frames using template matching. In contrast to machine learning-based approaches, which require a significant amount of data to train a model, elastic matching-based approaches have no model training process but perform recognition using template matching. Five different elastic mat…

Cited by 27 publications (21 citation statements)
References 64 publications
“…During preprocessing of the aggregated signal, a median filter of five samples was used for smoothing, as proposed in [59]; the preprocessed signal was then segmented into overlapping frames of length L = 10 samples with a time shift of 5 samples between successive frames. The optimal number of samples per frame was determined through a grid search on a bootstrap dataset with ideal aggregated data (without ghost power), consisting of one dataset from each database (ECO-2, REDD-2 and iAWE), similar to [67,68].…”
Section: Parameterization and Feature Selection
confidence: 99%
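The preprocessing pipeline quoted above (five-sample median smoothing, then overlapping frames of L = 10 samples with a shift of 5) can be sketched as follows. The function name and synthetic input are illustrative, not taken from the cited paper:

```python
import numpy as np

def preprocess_frames(signal, kernel=5, frame_len=10, shift=5):
    """Sketch of the quoted preprocessing: median-filter smoothing,
    then segmentation into overlapping fixed-length frames."""
    # Median filter: replace each sample with the median of its window,
    # padding the edges by repeating the boundary values.
    pad = kernel // 2
    padded = np.pad(signal, pad, mode="edge")
    smoothed = np.array([np.median(padded[i:i + kernel])
                         for i in range(len(signal))])
    # Overlapping framing: frame start indices 0, 5, 10, ...
    starts = range(0, len(smoothed) - frame_len + 1, shift)
    return np.stack([smoothed[s:s + frame_len] for s in starts])

frames = preprocess_frames(np.arange(30, dtype=float))
print(frames.shape)  # (5, 10): five overlapping 10-sample frames
```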
“…To directly compare the proposed methodology with other approaches from the literature, we additionally tested our method on five selected loads from the REDD-2 dataset, namely the refrigerator, lighting, dishwasher, microwave, and furnace. These loads were used in [55] because they carry a large percentage of the overall consumed energy and have been used in other publications [67,75]. Furthermore, the disaggregation results were evaluated both in a noisy (with ghost data) and a noiseless (with synthetic data) setup, as in [75], for both the one-stage and the proposed two-stage fusion architecture.…”
Section: Device
confidence: 99%
“…First, machine learning has been used, including Convolutional Neural Networks (CNNs) [3,4] and Long Short-Term Memory (LSTM) networks [5,6] as regression models for estimating power consumption at the device level, as well as Hidden Markov Models (HMMs) [7,8]. Second, template matching techniques have been utilized to find the best matches between appliance and aggregated signatures using dynamic time warping [9] and elastic matching [10]. Third, source separation approaches have been used to separate the aggregated signal into its subcomponents, i.e.…”
Section: Introduction
confidence: 99%
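As a sketch of the template-matching idea the quote describes, a minimal dynamic time warping (DTW) distance can drive nearest-signature classification with no training step. This is an illustrative toy with hypothetical signatures, not the specific algorithms of [9] or [10]:

```python
import numpy as np

def dtw_distance(query, template):
    # Classic DTW: cumulative cost of elastically aligning two sequences.
    n, m = len(query), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - template[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def match_signature(frame, signatures):
    # Template matching: return the reference signature with the smallest
    # elastic-matching distance -- no model training involved.
    return min(signatures, key=lambda name: dtw_distance(frame, signatures[name]))

# Hypothetical reference signatures (watts), purely for illustration:
signatures = {
    "refrigerator": [0.0, 120.0, 120.0, 120.0, 0.0],
    "microwave": [0.0, 800.0, 800.0, 0.0],
}
print(match_signature([0.0, 115.0, 125.0, 118.0, 0.0], signatures))  # refrigerator
```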
“…Data from Dataport had a lower sampling rate of 1/60 Hz or 1/3600 Hz. Schirmer et al [19] tested five different elastic matching algorithms in NILM based on REDD. Minimum variance matching (MVM) achieved the best results as measured by both Accuracy and F1-Measure (definition in Section 2.6, Formula (6)), at 87.58% and 89.19%, respectively.…”
Section: Introduction
confidence: 99%
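The Accuracy and F1 figures cited above rest on the standard F1 definition, the harmonic mean of precision and recall. Assuming the cited paper's Formula (6) follows the usual form, a minimal sketch:

```python
def f1_measure(tp, fp, fn):
    # F1 = 2 * precision * recall / (precision + recall),
    # with precision = TP / (TP + FP) and recall = TP / (TP + FN).
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical event counts, purely for illustration:
print(round(f1_measure(tp=80, fp=10, fn=10), 4))  # 0.8889
```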
“…Minimum variance matching (MVM) achieved the best results as measured by both Accuracy and F1-Measure (definition in Section 2.6, Formula (6)), at 87.58% and 89.19%, respectively. As noted by Schirmer et al [19], in contrast to the algorithms they used, machine learning-based approaches require a much larger dataset to train the model. De Paiva Penha and Castro [20] applied a convolutional neural network (CNN) approach to model the activity of six appliances in six houses using data from REDD.…”
Section: Introduction
confidence: 99%