Proceedings of the 5th International Workshop on Non-Intrusive Load Monitoring 2020
DOI: 10.1145/3427771.3427855

Explainable NILM Networks

Abstract: There has been an explosion in the recent literature on non-intrusive load monitoring (NILM) approaches based on neural networks and other advanced machine learning methods. However, although these methods provide competitive accuracy, the inner workings of these models are less clear. Understanding the outputs of the networks helps improve the designs, highlights the relevant features and aspects of the data used for making the decision, and provides a better picture of the accuracy of the models (since a singl…

Cited by 12 publications (7 citation statements) · References 17 publications
“…Explainable AI (XAI) attempts to promote a more transparent and trustworthy AI through the creation of methods that make the function and predictions of machine learning systems comprehensible to humans, without sacrificing performance levels [50]. Explainable NILM networks proposed by [51] try to understand the inner workings of the machine learning models used for NILM.…”
Section: A Brief NILM Literature Review (mentioning, confidence: 99%)
“…Understanding the outputs of the networks contributes to improving the NILM model structure, highlights the relevant features and aspects of the data used for making the decision, provides a clearer picture of the accuracy of the models (since a single accuracy number is often insufficient), and inherently provides a level of trust in the value of the consumption feedback given to the NILM end-user. Murray et al. [51, 110] investigated how explainable AI (XAI) approaches can be used to explain the inner workings of NILM deep learning models and examined why the network performs or does not perform well in certain cases. Explainable AI is used to analyze input data and address biases, especially when the NILM algorithms are tested in unseen houses, in order to improve the performance of the models [110].…”
Section: Trustworthiness in NILM Algorithms: Can We Trust AI in NILM ... (mentioning, confidence: 99%)
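
To make this concrete, below is a minimal sketch of one common XAI technique, gradient-based input saliency, applied to a toy sequence-to-point NILM model. The Seq2Point architecture, window length, and PyTorch framing are illustrative assumptions and are not taken from the cited papers.

# Hypothetical sketch: input saliency for a sequence-to-point NILM model.
import torch
import torch.nn as nn

class Seq2Point(nn.Module):
    """Toy stand-in for a NILM disaggregation network."""
    def __init__(self, window=99):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * window, 1),  # predicts appliance power at the window midpoint
        )

    def forward(self, x):  # x: (batch, 1, window)
        return self.net(x)

def input_saliency(model, mains_window):
    """Gradient of the predicted appliance power w.r.t. each input sample.

    Large absolute gradients mark the time steps the network relies on,
    which is one simple way to inspect what drives a NILM prediction.
    """
    mains_window = mains_window.clone().requires_grad_(True)
    model(mains_window).sum().backward()
    return mains_window.grad.abs().squeeze()

model = Seq2Point()
mains = torch.randn(1, 1, 99)            # one aggregate-power window (placeholder data)
saliency = input_saliency(model, mains)
print(saliency.shape)                     # torch.Size([99]): one importance score per time step

Occlusion analysis, i.e. zeroing out slices of the input window and measuring the change in the predicted power, is a drop-in alternative when gradients are not available.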
“…x_max and x_min correspond to maximal and minimal values. These values can be maximal or minimal values of the training dataset, parameters fixed by the authors [53], or quantile values [40]. In order to make the statistics of the data less sensitive to outliers, [44] transformed them with an arcsinh before normalizing.…”
Section: Preprocessing (mentioning, confidence: 99%)
“…x_max and x_min correspond to maximal and minimal values. These values can be maximal or minimal values of the training dataset, parameters fixed by the authors [68], or quantile values [69]. In order to make the statistics of the data less sensitive to outliers, [70] transformed them with an arcsinh before normalizing.…”
Section: Preprocessing (mentioning, confidence: 99%)
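
As a quick illustration of the preprocessing described in the statements above, the sketch below combines an arcsinh transform with quantile-based min-max scaling. The 1st/99th percentile bounds and the function names are hypothetical choices; the cited works may fix x_min and x_max differently.

# Hypothetical sketch: arcsinh transform followed by quantile min-max scaling.
import numpy as np

def fit_bounds(train_mains, low_q=0.01, high_q=0.99):
    """Estimate x_min and x_max on training data, here via quantiles."""
    x = np.arcsinh(train_mains)                 # compress large readings before scaling
    return np.quantile(x, low_q), np.quantile(x, high_q)

def normalize(mains, x_min, x_max):
    """Apply arcsinh, then min-max scale to [0, 1] with the fitted bounds."""
    x = np.arcsinh(mains)
    return np.clip((x - x_min) / (x_max - x_min), 0.0, 1.0)

train = np.abs(np.random.randn(10_000)) * 300.0   # placeholder aggregate power in watts
x_min, x_max = fit_bounds(train)
scaled = normalize(train, x_min, x_max)
print(scaled.min(), scaled.max())                  # both stay within [0, 1]

Because x_min and x_max are fitted on the training set only, the same bounds can be reused on unseen houses, which is where outlier-robust statistics such as quantiles or the arcsinh transform are most useful.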