Deep neural networks have achieved state-of-the-art performance in many domains, including computer vision, natural language processing, and autonomous driving. However, they are computationally expensive and memory intensive, which poses significant challenges for deploying or training them in latency-critical applications or resource-limited environments. Consequently, many approaches have been proposed to accelerate and compress deep learning models, yet most fail to maintain the accuracy of the baseline models. In this paper, we describe EnSyth, a deep learning ensemble approach that enhances the predictive performance of compact neural network models. First, we generate a set of diverse compressed deep learning models by varying the hyperparameters of a pruning method; we then use ensemble learning to synthesise the outputs of the compressed models into a new pool of classifiers. Finally, we apply backward elimination to the generated pool to explore the best-performing combinations of models. On the CIFAR-10 and CIFAR-5 datasets with LeNet-5, EnSyth outperforms the baseline model in predictive performance.
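To make the pipeline concrete, the sketch below illustrates the backward-elimination stage over a pool of candidate classifiers. It is a minimal illustration, assuming the pruned models' class-probability outputs are already available as arrays; the names used here (candidate_probs, ensemble_accuracy) are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples, n_classes = 8, 1000, 10

# Stand-ins for the softmax outputs of diversely pruned models and the
# ground-truth labels; in practice these come from the compressed networks.
candidate_probs = rng.dirichlet(np.ones(n_classes), size=(n_models, n_samples))
labels = rng.integers(0, n_classes, size=n_samples)

def ensemble_accuracy(member_ids):
    """Accuracy of averaging the outputs of the chosen ensemble members."""
    avg = candidate_probs[list(member_ids)].mean(axis=0)
    return (avg.argmax(axis=1) == labels).mean()

# Backward elimination: start from the full pool and greedily drop the
# member whose removal most improves (or least hurts) ensemble accuracy.
pool = set(range(n_models))
best_ids, best_acc = frozenset(pool), ensemble_accuracy(pool)
while len(pool) > 1:
    drop = max(pool, key=lambda m: ensemble_accuracy(pool - {m}))
    pool.remove(drop)
    acc = ensemble_accuracy(pool)
    if acc >= best_acc:
        best_ids, best_acc = frozenset(pool), acc

print(f"best subset: {sorted(best_ids)}, accuracy: {best_acc:.3f}")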
Federated Learning (FL) is an area of machine learning that enables different clients to collaboratively train a shared model while preserving the privacy of their data. In a typical FL setting, a central model is updated by aggregating the clients' parameters of the respective artificial neural network, and the aggregated parameters are then sent back to the clients. However, two main challenges are associated with this central aggregation approach. First, most state-of-the-art strategies are not optimised to operate in the presence of certain non-iid (not independent and identically distributed) applications and datasets. Second, federated learning is vulnerable to privacy and security threats such as model inversion attacks, which can be used to recover sensitive information from the training data. To address these issues, we propose FedNets, a novel federated learning strategy based on ensemble learning. Instead of sharing the parameters of the clients over the network to update a single global model, our approach allows clients to hold ensembles of diverse, lightweight models and to collaborate by sharing ensemble members. FedNets utilises graph embedding theory to reduce the complexity of running Deep Neural Networks (DNNs) on resource-limited devices: each DNN is treated as a graph, from which graph embeddings are generated and clustered to determine which part of the DNN should be shared with other clients. Our approach outperforms state-of-the-art FL algorithms such as Federated Averaging (Fed-Avg) and Adaptive Federated Optimisation (Fed-Yogi) in terms of accuracy; on the Federated CIFAR100 dataset (non-iid), FedNets achieves 63% and 92% higher accuracy, respectively. Furthermore, FedNets does not compromise the clients' privacy, as it is safeguarded by the design of the method.
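The sketch below illustrates the sharing step described in the abstract: treat a client's DNN as a graph, embed its sub-graphs (here, individual layers), cluster the embeddings, and select which parts to share with other clients. The crude statistics-based embedding, the tiny k-means, and all names here are assumptions made for illustration, not the paper's actual implementation.

import numpy as np

rng = np.random.default_rng(1)

# Stand-in client model: a list of layer weight matrices (the "graph",
# with neurons as nodes and weights as edges).
layers = [rng.normal(size=(32, 32)) for _ in range(6)]

def layer_embedding(w):
    """Embed one layer's bipartite weight graph via simple edge statistics
    (mean/std of |w|, edge density, mean node degree). A real system would
    use a proper graph-embedding model."""
    a = np.abs(w)
    active = a > 0.1  # treat near-zero weights as absent edges
    return np.array([a.mean(), a.std(), active.mean(),
                     active.sum(axis=1).mean()])

def kmeans(x, k, iters=20):
    """Tiny k-means so the sketch stays dependency-free."""
    centers = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = ((x[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(axis=0)
    return assign

emb = np.stack([layer_embedding(w) for w in layers])
assign = kmeans(emb, k=2)

# Share one representative part per cluster rather than the full model,
# reducing communication and exposing less of the client's parameters.
shared = [int(np.flatnonzero(assign == j)[0]) for j in np.unique(assign)]
print("cluster assignments:", assign, "-> share layers:", shared)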