The competing demands of data sharing and privacy have brought increasing attention to federated learning. However, existing aggregation models are too specialized and rarely address the issue of user withdrawal. Moreover, protocols for multiparty entity matching are rarely covered. Thus, there is no systematic framework for performing federated learning tasks. In this paper, we propose a privacy-preserving federated learning framework (PFLF). We first construct a general secure aggregation model for federated learning scenarios by combining Shamir secret sharing with homomorphic cryptography, ensuring that the aggregated value can be decrypted correctly only when the number of participants is greater than a threshold t. Furthermore, we propose a multiparty entity matching protocol that employs secure multiparty computation to solve the entity alignment problem, and a logistic regression algorithm that achieves privacy-preserving model training and supports user withdrawal in vertical federated learning (VFL) scenarios. Finally, security analyses prove that PFLF preserves data privacy in the honest-but-curious model, and experimental evaluations show that PFLF attains accuracy consistent with the original model and is practically feasible.
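The threshold property described above comes from Shamir secret sharing: a secret is encoded as the constant term of a random degree-(t-1) polynomial, and any t evaluation points recover it while fewer reveal nothing. The sketch below is a minimal illustration of that primitive only (the prime, parameter names, and helper functions are our own choices, not part of PFLF):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for this demo


def split_secret(secret, t, n):
    """Split `secret` into n shares; any t of them suffice to reconstruct it."""
    # Random polynomial with the secret as its constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from >= t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret


shares = split_secret(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares work
assert reconstruct(shares[1:4]) == 123456789
```

In the aggregation setting, each participant would hold one such share of a decryption key, so decryption succeeds only once more than t participants contribute.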
As an important approach to overcoming data silos and privacy concerns in deep learning, federated learning, which jointly trains a global model while keeping data local, has shown remarkable performance in a range of industrial applications. However, federated learning still suffers from the problem that shared gradients may be tampered with, exploited by inference attacks, or falsified. To address this issue, we propose a verifiable federated learning framework that deals with malicious aggregators. First, we propose a reputation calculation mechanism based on a multiweight subjective logic model to solve the problem of selecting a reliable aggregator. Furthermore, we design a verifiable federated learning scheme that ensures data confidentiality, integrity, and verifiability, and supports clients' dynamic withdrawal. Security analyses indicate that our framework is secure against malicious adversaries. Moreover, experimental results on real datasets show that our verifiable federated learning achieves high accuracy and feasible efficiency.
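In subjective logic, an opinion about a party is formed from positive and negative evidence counts, split into belief, disbelief, and uncertainty mass; a "multiweight" variant weights different kinds of evidence differently. The sketch below is a hypothetical, minimal version of such a reputation score (the weights, prior, and non-informative mass `W` are illustrative assumptions, not the paper's actual parameters):

```python
def reputation(pos, neg, w_pos=0.6, w_neg=1.4, prior=0.5, W=2.0):
    """Expected reputation in [0, 1] from weighted interaction counts.

    pos / neg : counts of positive / negative interactions with an aggregator
    w_pos, w_neg : evidence weights; w_neg > w_pos penalizes bad behavior more
    prior : base rate used to distribute the uncertainty mass
    W : non-informative prior weight (standard subjective-logic constant)
    """
    r = w_pos * pos           # weighted evidence supporting trust
    s = w_neg * neg           # weighted evidence against trust
    b = r / (r + s + W)       # belief mass
    u = W / (r + s + W)       # uncertainty mass (disbelief is s / (r + s + W))
    return b + prior * u      # expected reputation


# A consistently honest aggregator outranks a mixed one.
assert reputation(10, 0) > reputation(5, 5)
```

A client would then pick the candidate aggregator with the highest such score, with `W` keeping scores near the prior until enough evidence accumulates.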