While gradient aggregation plays a vital role in federated and collaborative learning, recent studies have revealed that it is vulnerable to attacks such as gradient inversion, in which private training data can be recovered from the shared gradients. However, the performance of existing attack methods is limited because they usually require prior knowledge of Batch Normalization statistics and can only reconstruct a single image or a small batch of images. To make the attack less restrictive and more applicable, we propose an effective and practical gradient inversion method in this paper. Specifically, we use cosine similarity to measure the difference between the gradients of the synthesized and ground-truth images, and then construct an input regularization for the fully connected layer to ensure the fidelity of the image. Moreover, we apply a total variation denoising strategy to the convolutional feature maps to further improve the smoothness of the reconstructed image. Experimental results demonstrate that our method can reconstruct high-fidelity training data at large batch sizes on complex data sets such as ImageNet.
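The objective described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it shows only the generic shape of a gradient-inversion loss, combining a cosine distance between the attacker's dummy gradients and the observed gradients with a total variation regularizer on the synthesized image. The function names and the `tv_weight` parameter are illustrative assumptions.

```python
import numpy as np

def cosine_distance(g_dummy, g_true):
    # 1 - cosine similarity between the flattened gradient vectors;
    # minimized when the dummy gradients align with the observed ones.
    num = float(np.dot(g_dummy.ravel(), g_true.ravel()))
    den = float(np.linalg.norm(g_dummy) * np.linalg.norm(g_true)) + 1e-12
    return 1.0 - num / den

def total_variation(img):
    # Anisotropic total variation: sum of absolute differences
    # between horizontally and vertically adjacent pixels.
    dh = np.abs(np.diff(img, axis=-2)).sum()
    dw = np.abs(np.diff(img, axis=-1)).sum()
    return float(dh + dw)

def inversion_loss(g_dummy, g_true, img, tv_weight=1e-4):
    # Loss minimized with respect to the synthesized image `img`,
    # whose forward/backward pass produces `g_dummy`.
    return cosine_distance(g_dummy, g_true) + tv_weight * total_variation(img)
```

In an actual attack, an optimizer would update the synthesized image (and hence `g_dummy`) to drive this loss toward zero; the TV term penalizes high-frequency noise and encourages smooth reconstructions.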
Differentially Private Stochastic Gradient Descent (DP-SGD) is a prime method for training machine learning models with rigorous privacy guarantees. Since its introduction, DP-SGD has gained popularity and has been widely adopted in both academic and industrial research. One well-known challenge in using DP-SGD is improving utility while maintaining privacy. To this end, several recent proposals clip the gradients with adaptive thresholds rather than a fixed one. Although each proposal comes with some theoretical justification, the theories often rely on strong assumptions and are not compatible with one another, so it is hard to know whether, and by how much, these methods help in practice. In this paper, we investigate adaptive clipping in DP-SGD from an empirical perspective. Through extensive experiments, we gained fresh insights and proposed two new adaptive clipping strategies based on them. We cross-compared the existing methods and our new strategies experimentally. The results show that our strategies provide a substantial improvement in model accuracy and consistently outperform state-of-the-art adaptive clipping methods.
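For readers unfamiliar with the mechanism being tuned, the core DP-SGD step, per-example L2 clipping followed by Gaussian noise, can be sketched as follows. This is a generic NumPy illustration, not the paper's code; `adaptive_clip_norm` shows one hypothetical quantile-style adaptive threshold (setting the clip norm to a chosen quantile of the batch's gradient norms) rather than any specific published strategy.

```python
import numpy as np

def clip_and_noise(per_example_grads, clip_norm, noise_multiplier, rng):
    # Core DP-SGD step: clip each example's gradient to L2 norm
    # `clip_norm`, sum, add calibrated Gaussian noise, then average.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

def adaptive_clip_norm(per_example_grads, quantile=0.5):
    # Hypothetical adaptive threshold: clip at a quantile of the
    # current batch's per-example gradient norms instead of a fixed value.
    norms = [np.linalg.norm(g) for g in per_example_grads]
    return float(np.quantile(norms, quantile))
```

The adaptive-clipping proposals the paper compares differ precisely in how a value like `clip_norm` is chosen and updated over training; the clip-then-noise step itself stays the same.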