Centralized machine learning methods require the aggregation of data collected from clients. With growing awareness of data privacy, however, aggregating the raw data collected by Internet of Things (IoT) devices is infeasible in many scenarios. Federated learning (FL), a distributed learning framework, can run across multiple IoT devices. It addresses privacy leakage by training a model locally on the client side rather than on a server that aggregates all the raw data. However, FL remains vulnerable to poisoning attacks. Label flipping attacks, a typical form of data poisoning in FL, aim to poison the global model by submitting model updates trained on data with mismatched labels. The central parameter aggregation server struggles to detect label flipping attacks because it cannot access clients' data in a typical FL system. In this work, we are motivated to prevent label flipping poisoning attacks by observing how model parameters change when trained on data of different single labels. We propose a novel detection method based on the average weight of each class in its associated fully connected layer. This method detects label flipping attacks by identifying differences between classes in the data based on the weight assignments in a fully connected layer of the neural network model, and applies a statistical algorithm to recognize malicious clients. We conduct extensive experiments on benchmark datasets such as Fashion-MNIST and the Intrusion Detection Evaluation Dataset (CIC-IDS2017). Comprehensive experimental results demonstrate that our method achieves detection accuracy above 90% in identifying attackers that flip labels.
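To make the detection idea concrete, below is a minimal, hypothetical sketch of per-class average-weight screening: it averages the final fully connected layer's weights per class for each client's update and flags clients whose averages are statistical outliers. The abstract does not specify the statistical algorithm; the robust z-score (median/MAD) used here, and all function names, are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def class_avg_weights(fc_weight):
    """Average weight of each class: the mean over the row of the final
    fully connected layer associated with that class.
    fc_weight has shape (num_classes, num_features)."""
    return fc_weight.mean(axis=1)

def flag_suspicious_clients(client_fc_weights, z_thresh=8.0):
    """Flag clients whose per-class average weights are outliers.

    client_fc_weights: list of (num_classes, num_features) arrays,
    one per client update. Returns indices of suspected label flippers.
    """
    # One feature vector per client: the average weight of each class.
    feats = np.stack([class_avg_weights(w) for w in client_fc_weights])
    # Median/MAD across clients is robust to the attackers themselves.
    med = np.median(feats, axis=0)
    mad = np.median(np.abs(feats - med), axis=0) + 1e-12
    scores = np.abs(feats - med) / mad  # robust z-score per class
    return [i for i, s in enumerate(scores) if s.max() > z_thresh]

# Toy usage: 10 clients; client 7 swaps the rows of classes 0 and 9,
# mimicking the effect of training on flipped labels.
rng = np.random.default_rng(0)
clients = [rng.normal(0, 0.1, (10, 128)) + np.arange(10)[:, None] * 0.05
           for _ in range(10)]
clients[7][[0, 9]] = clients[7][[9, 0]]
print(flag_suspicious_clients(clients))  # expected to flag index 7
```

In this toy setup the swapped rows shift two of client 7's per-class averages far from the population median, so the robust z-score isolates that client while benign sampling noise stays well below the threshold.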
Federated learning has been widely studied alongside people's increasing awareness of privacy protection. It mitigates privacy leakage by allowing many clients to train a collaborative model without uploading the local data collected by Internet of Things (IoT) devices. However, threats of privacy leakage remain in federated learning. GAN-based privacy inference attacks can reconstruct other clients' private data from the parameters shared during the iterative training of the global model. In this work, we are motivated to prevent GAN-based privacy inference attacks in federated learning. Inspired by the idea of gradient compression, we propose a defense method called Federated Learning Parameter Compression (FLPC), which reduces the amount of information shared for privacy protection. It prevents attackers from recovering the private information of victims while maintaining the accuracy of the global model. Extensive experimental results demonstrate that our method is effective in preventing GAN-based privacy inference attacks. In addition, based on the experimental results, we propose a norm-based metric to assess privacy-preserving performance.
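As an illustration of the compression idea, here is a minimal sketch of top-k sparsification applied to a client update before upload, together with a simple norm-based quantity for gauging how much of the update is still shared. Top-k is a common gradient-compression choice and is an assumption here; the abstract does not give FLPC's exact compression rule, and this norm ratio is an illustrative stand-in rather than the paper's metric.

```python
import numpy as np

def topk_compress(update, ratio=0.1):
    """Keep only the largest-magnitude fraction of a model update and
    zero out the rest (top-k sparsification, assumed for illustration)."""
    flat = update.ravel()
    k = max(1, int(flat.size * ratio))
    # Indices of the k entries with the largest magnitude.
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    compressed = np.zeros_like(flat)
    compressed[keep] = flat[keep]
    return compressed.reshape(update.shape)

def retained_norm_ratio(original, compressed):
    """Norm-based view of how much parameter information survives
    compression: the fraction of the update's L2 norm that is kept."""
    return np.linalg.norm(compressed) / (np.linalg.norm(original) + 1e-12)

# Toy usage: sparsify a client's update before sending it to the server.
rng = np.random.default_rng(1)
update = rng.normal(size=(256, 128))
sparse = topk_compress(update, ratio=0.1)
print(f"nonzero fraction: {np.count_nonzero(sparse) / sparse.size:.2f}")
print(f"retained norm:    {retained_norm_ratio(update, sparse):.2f}")
```

The intuition is that a GAN-based attacker reconstructs private data from shared parameters, so shrinking the shared update (while keeping its dominant components for model accuracy) limits the signal available for reconstruction.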