Monitoring electricity consumption in residential buildings is an important way to help reduce energy usage. Nonintrusive load monitoring (NILM) separates the total electrical load of a single household into individual appliance loads. The problem is difficult because the energy consumption of each appliance must be inferred from the aggregate load alone. Deep transfer learning is a promising approach. This paper proposes a deep neural network model based on an attention mechanism. The model extends the traditional sequence-to-sequence architecture with a time-embedding layer and an attention layer so that it can be better applied to NILM. In particular, the improved model abandons the recurrent neural network structure, which shortens training time and makes it more suitable for pretraining on large datasets. To verify the validity of the model, we evaluated it on three open datasets and compared it with the current leading model. The results show that transfer learning effectively improves the model's predictive ability and that the proposed model outperforms the most advanced available model.
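As a concrete illustration of the architecture this abstract describes (a seq2seq disaggregator that replaces the RNN with attention and adds a time-embedding layer), here is a minimal PyTorch sketch. The class name, layer sizes, and window length are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class AttentionSeq2Seq(nn.Module):
    """Sketch of an RNN-free seq2seq disaggregator: a learned time
    embedding plus self-attention over a window of aggregate readings.
    All dimensions below are assumptions for illustration."""
    def __init__(self, window_len=128, d_model=64, n_heads=4):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)              # scalar watts -> d_model
        self.time_embed = nn.Embedding(window_len, d_model)  # learned time embedding
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, 1)                    # appliance power per step

    def forward(self, x):                         # x: (batch, window_len) aggregate load
        h = self.input_proj(x.unsqueeze(-1))      # (batch, T, d_model)
        t = torch.arange(x.size(1), device=x.device)
        h = h + self.time_embed(t)                # add per-step time embedding
        a, _ = self.attn(h, h, h)                 # self-attention replaces the RNN
        h = self.norm(h + a)
        return self.head(h).squeeze(-1)           # (batch, window_len) appliance load
```

A forward pass maps a window of aggregate readings to an appliance-level window of the same length, e.g. `AttentionSeq2Seq()(torch.randn(8, 128))`. Dropping the recurrence lets every time step attend to the whole window in parallel, which is the property the abstract credits with shortening training time.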
Nonintrusive load monitoring (NILM) decomposes the household load algorithmically using only the main-circuit load information, which is an important way to help reduce energy usage. Recent research shows that deep learning has become popular for this problem; however, a neural network's ability to extract load features depends on its structure, so more research is required to determine the best architecture. This study proposes two deep neural networks based on the attention mechanism that improve the current sequence-to-point (s2p) learning model. The first model combines Bahdanau-style attention with RNN layers; the second replaces the RNN layers with a self-attention layer. Both models build on a time-embedding layer, so they can be better applied to NILM. To verify the effectiveness of the algorithms, we selected two open datasets and compared the models with the original s2p model. The results show that attention mechanisms effectively improve the model's performance.
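For the first s2p variant (Bahdanau-style additive attention pooling RNN hidden states into a single point prediction), a minimal PyTorch sketch under the same caveats follows. The time-embedding layer is omitted for brevity, and the class name, layer sizes, and GRU choice are assumptions rather than the authors' code:

```python
import torch
import torch.nn as nn

class BahdanauS2P(nn.Module):
    """Sketch of an s2p model: a GRU encoder over the aggregate window,
    additive (Bahdanau) attention pooling the hidden states, and a single
    output point, conventionally the window-midpoint appliance power."""
    def __init__(self, d_hidden=64):
        super().__init__()
        self.rnn = nn.GRU(1, d_hidden, batch_first=True)
        # additive attention: score_t = v^T tanh(W h_t)
        self.W = nn.Linear(d_hidden, d_hidden)
        self.v = nn.Linear(d_hidden, 1, bias=False)
        self.out = nn.Linear(d_hidden, 1)

    def forward(self, x):                        # x: (batch, window_len) aggregate load
        h, _ = self.rnn(x.unsqueeze(-1))         # (batch, T, d_hidden)
        scores = self.v(torch.tanh(self.W(h)))   # (batch, T, 1) attention scores
        alpha = torch.softmax(scores, dim=1)     # weights over time steps
        context = (alpha * h).sum(dim=1)         # (batch, d_hidden) pooled state
        return self.out(context).squeeze(-1)     # one point per input window
```

The second variant the abstract mentions would swap `self.rnn` for a self-attention layer, as in the sketch after the previous abstract; the s2p output head (one point per window) stays the same.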