Energy consumption in buildings is responsible for 40% of the final energy consumption in the European Union and the United States of America. In addition to thermal energy, buildings require electricity for all kinds of appliances. Regulatory constraints such as energy labels aim to increase the energy efficiency of large appliances such as fridges and washing machines, but they only partially cover plug-loads. The electricity consumption of unregulated plug-loads such as mobile phones, USB chargers and kettles is continuously increasing; in European households, their share of electricity consumption reached 25% in 2018. Additional data about plug-load usage can help decrease the energy consumption of buildings by improving energy management systems and enabling peak shaving or demand-side management. Because people live and work in buildings, such data are privacy sensitive. Federated Learning (FL) helps to leverage these data without violating regulatory frameworks such as the General Data Protection Regulation. We use a high-frequency energy data set of office appliances (BLOND) to train four appliance classifiers (CNN, LSTM, ResNet and DenseNet). We investigate the effect of different data distributions (entire dataset, IID and non-IID) and training methods on four performance metrics (accuracy, F1 score, precision and recall). The results show that a non-IID setup decreases all performance metrics for some model architectures by 44%. However, our LSTM model achieves F1 scores similar to central training even with non-IID labels. Additionally, we show the importance of client selection in FL architectures for reducing the overall training time, and we quantify the decrease in network traffic compared to a central training approach, as well as the energy consumption and scalability.
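The abstract above describes federated training with client selection over IID and non-IID (label-skewed) client data. The sketch below illustrates that general pattern, assuming a PyTorch setup with FedAvg-style aggregation; the split function, client fraction, and training loop are illustrative placeholders and not the paper's actual BLOND pipeline.

```python
# Minimal FedAvg sketch with random client selection over a label-skew non-IID split.
# All names and hyperparameters here are hypothetical examples, not the paper's code.
import copy
import random
import torch
import torch.nn as nn

def make_non_iid_split(labels, n_clients, labels_per_client=2):
    """Give each client samples from only a few classes (label-skew non-IID)."""
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(int(y), []).append(idx)
    classes = list(by_class)
    return [
        [i for cls in random.sample(classes, k=min(labels_per_client, len(classes)))
         for i in by_class[cls]]
        for _ in range(n_clients)
    ]

def fed_avg(global_model, client_states, client_sizes):
    """Weighted average of client parameters (the FedAvg aggregation step).

    Assumes float-valued parameters (no integer buffers such as BatchNorm counters).
    """
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
    global_model.load_state_dict(avg)
    return global_model

def federated_round(global_model, client_loaders, frac=0.3, local_epochs=1, lr=1e-3):
    """One communication round: select a fraction of clients, train locally, aggregate."""
    selected = random.sample(range(len(client_loaders)),
                             k=max(1, int(frac * len(client_loaders))))
    states, sizes = [], []
    loss_fn = nn.CrossEntropyLoss()
    for cid in selected:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in client_loaders[cid]:
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
        states.append(local.state_dict())
        sizes.append(len(client_loaders[cid].dataset))
    return fed_avg(global_model, states, sizes)
```

Selecting only a fraction of clients per round, as in `federated_round`, is what limits both per-round network traffic and overall training time relative to involving every client in every round.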
Convolutional neural networks are showing promise in the automatic diagnosis of thoracic pathologies on chest X-rays. Their black-box nature has sparked many recent works that explain predictions via input feature attribution methods (a.k.a. saliency methods). However, input feature attribution methods merely identify the importance of input regions for the prediction and lack a semantic interpretation of model behavior. In this work, we first identify the semantics associated with internal units (feature maps) of the network. We then investigate the following questions: Does a regression model that is only trained with COVID-19 severity scores implicitly learn visual patterns associated with thoracic pathologies? Does a network that is trained on weakly labeled data (e.g. healthy, unhealthy) implicitly learn pathologies? Moreover, we investigate the effect of pretraining and data imbalance on the interpretability of learned features. In addition to this analysis, we propose semantic attribution to semantically explain each prediction. We present our findings using publicly available chest pathology datasets (CheXpert [5], NIH) and a COVID-19 chest X-ray segmentation dataset [4]. The code is publicly available.
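Identifying the semantics of internal units, as described above, is commonly done by measuring how well a unit's thresholded activation overlaps with an annotated pathology region. The following is a minimal, network-dissection-style sketch of that idea in PyTorch; the function name, quantile threshold, and tensor shapes are assumptions for illustration, not the paper's implementation.

```python
# Score how well one internal unit (feature map) aligns with a pathology mask
# via thresholded IoU. Names and defaults are illustrative assumptions.
import torch
import torch.nn.functional as F

def unit_mask_iou(feature_map, pathology_mask, quantile=0.99):
    """feature_map: (H', W') activation of one unit; pathology_mask: (H, W) binary mask."""
    # Upsample the unit's activation map to the mask resolution.
    act = F.interpolate(feature_map[None, None], size=pathology_mask.shape,
                        mode="bilinear", align_corners=False)[0, 0]
    # Binarise the activation at a high quantile threshold.
    thresh = torch.quantile(act.flatten(), quantile)
    act_bin = act >= thresh
    mask = pathology_mask.bool()
    inter = (act_bin & mask).sum().float()
    union = (act_bin | mask).sum().float().clamp(min=1)
    return (inter / union).item()
```

Units whose IoU with a given pathology mask is consistently high across images can then be read as detectors for that pathology, even when the network was never trained with explicit pathology labels.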