Purpose
In the digital age, organizations want to build more powerful machine learning models to serve people's growing needs. However, preserving privacy and data security remains a key challenge for machine learning, especially in federated learning: parties want to collaborate to build a better model, but they do not want to reveal their own data. This study introduces threats to, and defenses against, privacy leakage in the collaborative learning model.
Design/methodology/approach
In the collaborative model, the attacker can be either the central server or a participant. In this study, the attacker is a participant who is “honest but curious.” The attack experiments run on the participant’s side, which performs two tasks: the first is to train the collaborative learning model; the second is to build a generative adversarial network (GAN) that carries out the attack by inferring additional information from the parameters received from the central server. Three typical attack types are considered: white box, black box without auxiliary information and black box with auxiliary information. The experimental environment is set up with PyTorch on the Google Colab platform, running on a graphics processing unit (GPU), using the Labeled Faces in the Wild (LFW) and CIFAR-10 (Canadian Institute For Advanced Research) data sets.
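To make the two participant-side tasks concrete, the following is a minimal PyTorch sketch in the spirit of the well-known GAN-based attack on collaborative learning (Hitaj et al., 2017), where the shared global model received from the server acts as the discriminator. The generator architecture, the victim class and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the participant-side attack: the global model received from the
# central server serves as the discriminator, and a local generator is trained
# to synthesize inputs that the global model assigns to a victim class.
# Shapes, the target class and step counts are assumptions for illustration.
import torch
import torch.nn as nn

LATENT_DIM, TARGET_CLASS = 100, 0  # hypothetical victim class to reconstruct

class Generator(nn.Module):
    """Maps noise z to a 32 x 32 RGB image (CIFAR-10-sized)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

    def forward(self, z):
        return self.net(z.view(-1, LATENT_DIM, 1, 1))

def attack_round(global_model, generator, g_opt, steps=50, batch=64):
    """One local attack round between federated updates: train the generator
    so the (frozen) global model classifies its outputs as the victim class."""
    global_model.eval()  # the attacker only reads the global model here
    loss_fn = nn.CrossEntropyLoss()
    target = torch.full((batch,), TARGET_CLASS, dtype=torch.long)
    for _ in range(steps):
        z = torch.randn(batch, LATENT_DIM)
        logits = global_model(generator(z))   # global model scores the fakes
        g_loss = loss_fn(logits, target)      # push fakes toward victim class
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
    return generator
```

The sketch corresponds to the white-box setting, where the attacker reads the received parameters directly; in the black-box settings, only the model's outputs (with or without auxiliary information) are available.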
Findings
The paper assumes that the privacy leakage attack takes place on the participant’s side and that the parameters exchanged through the central server reveal more knowledge than is needed to train the collaborative machine learning model. This study compares how successfully model parameters can be exploited for inference attacks using three GAN variants: conditional GAN (cGAN), controllable GAN (ControlGAN) and Wasserstein GAN (WGAN). Of these three models, WGAN proved to be the most stable.
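For reference, the objective behind WGAN's stability result replaces the standard GAN loss with an estimate of the Wasserstein distance. Below is a minimal sketch of one training step with weight clipping, per the original WGAN formulation; the clip value and batch sizes are the usual defaults, not necessarily this study's settings.

```python
# Minimal WGAN training step with weight clipping (Arjovsky et al., 2017).
# 'critic' and 'gen' are assumed to be ordinary nn.Module instances; the
# critic outputs an unbounded score rather than a probability.
import torch

def critic_step(critic, gen, c_opt, real, latent_dim=100, clip=0.01):
    """Critic maximizes E[f(real)] - E[f(G(z))]; the Lipschitz constraint is
    approximated by clipping the critic's weights after each update."""
    z = torch.randn(real.size(0), latent_dim)
    fake = gen(z).detach()                    # do not update the generator here
    loss = critic(fake).mean() - critic(real).mean()
    c_opt.zero_grad()
    loss.backward()
    c_opt.step()
    with torch.no_grad():                     # enforce the clipping constraint
        for p in critic.parameters():
            p.clamp_(-clip, clip)
    return loss.item()

def generator_step(critic, gen, g_opt, batch=64, latent_dim=100):
    """Generator minimizes -E[f(G(z))], i.e., raises its critic score."""
    z = torch.randn(batch, latent_dim)
    loss = -critic(gen(z)).mean()
    g_opt.zero_grad()
    loss.backward()
    g_opt.step()
    return loss.item()
```

The smoother Wasserstein loss, rather than the saturating cross-entropy loss, is the usual explanation for WGAN's training stability.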
Originality/value
Concerns about privacy and security are increasingly important for machine learning models, especially in collaborative learning. The paper contributes an experimental study of privacy attacks on the participant’s side of the collaborative learning model.