As a popular machine learning framework, federated learning (FL) enables clients to train cooperatively without sharing raw data, and thus offers stronger security than conventional centralized machine learning. However, because parameters are shared during the federated learning process, an attacker can still recover private information about participants' sensitive data by reverse-engineering those parameters. Recently, local differential privacy (LDP) has proven effective for preserving privacy in federated learning, but it faces the inherent problem of balancing privacy, model performance, and algorithmic efficiency. In this paper, we propose a novel local differential privacy method for federated learning (SLDP-FL), which achieves a privacy amplification effect through client self-sampling and transmits compressed, privatized parameters in each iteration via a compressed LDP mechanism, thereby noticeably improving both model performance and efficiency. Moreover, we theoretically analyze the relationship between model accuracy and the client self-sampling probability, and propose a restrictive client self-sampling technique that eliminates the arbitrariness of the self-sampling probability settings in existing studies and improves the utilization of the federated system. Comprehensive experiments on the MNIST and Fashion-MNIST datasets show that SLDP-FL improves on the existing federated learning framework through its compression mechanism and restrictive-probability self-sampling technique, outperforming current algorithms in accuracy, convergence, and communication efficiency.
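The privacy amplification via client self-sampling mentioned above is usually justified by the standard amplification-by-subsampling argument: if each client independently elects to participate with probability q, an ε-LDP report enjoys a smaller effective privacy parameter of ln(1 + q(e^ε − 1)). The following is a minimal sketch of that idea only; the paper's restrictive self-sampling technique, which derives q analytically from the accuracy analysis, is not reproduced here, and the function names are illustrative.

```python
import math
import random


def amplified_epsilon(eps, q):
    """Effective privacy parameter after self-sampling with probability q.

    Uses the standard amplification-by-subsampling bound
    ln(1 + q * (e^eps - 1)); with q = 1 it reduces to eps.
    """
    return math.log(1.0 + q * (math.exp(eps) - 1.0))


def self_sample(clients, q, rng=random):
    """Each client independently decides to participate with probability q."""
    return [c for c in clients if rng.random() < q]


# Example: sampling 10% of clients noticeably tightens the effective epsilon.
eps_local = 1.0
eps_effective = amplified_epsilon(eps_local, 0.1)
```

With q = 0.1 and ε = 1, the effective privacy parameter drops to roughly 0.16, which is the amplification effect the self-sampling step exploits.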
Federated learning (FL) pours vitality into the development of data-driven AI, but challenges remain, such as balancing security and efficiency. Differential privacy is one of the dominant approaches in privacy-preserving machine learning, and local differential privacy (LDP) further removes the need to trust the server by perturbing parameters before transmission, which makes it naturally suited to decentralized FL. However, current research suffers from low communication efficiency and poor adaptability to complex deep learning models. In this work, we propose an efficiency-optimized LDP data perturbation mechanism (Adaptive-Harmony), which allows an adaptive parameter range to reduce variance and improve model accuracy. Specifically, in each round every client adaptively selects its perturbation parameters according to the state of model training. Furthermore, the mechanism transmits only one bit per dimension of the model parameters, significantly reducing communication overhead. Theoretical analysis shows that Adaptive-Harmony matches the asymptotic error bounds and convergence performance of state-of-the-art methods while incurring minimal communication cost. We also propose an LDP-FL framework (Optimal LDP-FL) with Adaptive-Harmony at its core, and introduce parameter shuffling into Optimal LDP-FL to prevent the server from tracking clients through their model parameters, thereby improving the privacy level without consuming additional privacy budget. Comprehensive experiments on the MNIST and Fashion-MNIST datasets show that the proposed method significantly reduces computational and communication costs at the same level of privacy and model utility.
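To make the one-bit-per-dimension idea concrete, the sketch below shows a standard Duchi-style 1-bit ε-LDP perturbation for a scalar x in [c − r, c + r]: the client reports one of two values, the report is an unbiased estimate of x, and the likelihood ratio between any two inputs is bounded by e^ε. This is only an illustration of the mechanism family the abstract describes; Adaptive-Harmony additionally adapts c and r per client per round, which is not modeled here.

```python
import math
import random


def perturb_1bit(x, c, r, eps):
    """1-bit eps-LDP perturbation of a value x in [c - r, c + r].

    Returns one of two fixed values (a single bit suffices on the wire),
    chosen so that the report is an unbiased estimate of x.
    Sketch of a Duchi-style mechanism, not the paper's exact variant.
    """
    e = math.exp(eps)
    bound = r * (e + 1.0) / (e - 1.0)  # magnitude of the two possible reports
    # Probability of the "high" report; ranges over [1/(e+1), e/(e+1)],
    # so the mechanism satisfies eps-LDP.
    p = 0.5 + (x - c) * (e - 1.0) / (2.0 * r * (e + 1.0))
    if random.random() < p:
        return c + bound
    return c - bound
```

Averaging many reports recovers x: with output values c ± bound and the probability p above, the expectation is c + bound·(2p − 1) = x.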