Deep neural networks are susceptible to various inference attacks because they memorize information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through the parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent (SGD) algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants in the federated learning setting can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracy.
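The core white-box signal such attacks exploit is the gradient of the loss with respect to the model parameters, which SGD drives toward small values on training members but not on unseen records. The following is a minimal sketch of that signal in PyTorch; the model and data names are illustrative placeholders, and a full attack would feed richer per-layer gradient and activation statistics into a learned attack classifier rather than a simple score.

import torch
import torch.nn.functional as F

def per_example_gradient_norm(model, x, y):
    # L2 norm of d(loss)/d(parameters) for a single record (x, y).
    model.zero_grad()
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

def membership_scores(model, records):
    # Smaller gradient norms suggest membership, so negate them to get scores.
    model.eval()
    return [-per_example_gradient_norm(model, x, y) for x, y in records]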
Machine learning models leak information about the datasets on which they are trained. An adversary can build an algorithm to trace the individual members of a model's training dataset. In this fundamental inference attack, the adversary aims to distinguish between data points that were part of the model's training set and those that were not.
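As a simple illustration of this distinguishing game, the weakest black-box adversary can threshold the model's confidence (or loss) on a record, since models tend to be more confident on data they were trained on. The sketch below is only this baseline formulation, with illustrative names; it is not a specific attack from the literature.

def confidence_membership_test(predict_proba, record, true_label, threshold=0.9):
    # Black-box query: obtain the model's class-probability vector for the record,
    # then guess 'member' if the confidence on the true label is high enough.
    probs = predict_proba(record)
    return probs[true_label] >= threshold

In practice the threshold is calibrated, for example using shadow models trained on data drawn from the same distribution as the target's training set.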
Federated learning (FL) enables many data owners (e.g., mobile devices) to train a joint ML model (e.g., a next-word prediction classifier) without sharing their private training data. However, FL is known to be susceptible to poisoning attacks by malicious participants (e.g., adversary-owned mobile devices) who aim to degrade the accuracy of the jointly trained model by sending malicious inputs during the federated training process. In this paper, we present a generic framework for model poisoning attacks on FL. We show that our framework leads to poisoning attacks that substantially outperform state-of-the-art model poisoning attacks. For instance, our attacks result in 1.5× to 60× higher reductions in the accuracy of FL models compared to previously discovered poisoning attacks. Our work demonstrates that existing Byzantine-robust FL algorithms are significantly more susceptible to model poisoning than previously thought. Motivated by this, we design a defense against FL poisoning, called divide-and-conquer (DnC). We demonstrate that DnC outperforms all existing Byzantine-robust FL algorithms in defeating model poisoning attacks; in our experiments with different datasets and models, it is 2.5× to 12× more resilient.
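While the abstract does not spell out the DnC aggregation rule, its spirit can be illustrated by a spectral outlier filter over client updates: project the centered updates onto their top principal direction and drop the ones that deviate most before averaging. The sketch below is an assumption-laden illustration of that idea, not the paper's reference implementation.

import numpy as np

def dnc_style_aggregate(updates, num_suspected_malicious):
    # updates: (n_clients, dim) array of model updates from one FL round.
    centered = updates - updates.mean(axis=0)
    # The top right singular vector captures the dominant direction of disagreement.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    outlier_scores = (centered @ vt[0]) ** 2
    keep = np.argsort(outlier_scores)[: len(updates) - num_suspected_malicious]
    return updates[keep].mean(axis=0)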
In response to the growing popularity of Tor and other censorship circumvention systems, censors in nondemocratic countries have increased their technical capabilities and can now recognize and block network traffic generated by these systems on a nationwide scale. New censorship-resistant communication systems such as SkypeMorph, StegoTorus, and CensorSpoofer aim to evade censors' observations by imitating common protocols like Skype and HTTP. We demonstrate that these systems completely fail to achieve unobservability. Even a very weak, local censor can easily distinguish their traffic from the imitated protocols. We show dozens of passive and active methods that recognize even a single imitated session, without any need to correlate multiple network flows or perform sophisticated traffic analysis. We enumerate the requirements that a censorship-resistant system must satisfy to successfully mimic another protocol and conclude that "unobservability by imitation" is a fundamentally flawed approach. We then present our recommendations for the design of unobservable communication systems.
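To give a flavor of how little a local censor needs, the toy check below compares a flow's packet-size histogram against a reference histogram collected from the genuine protocol (e.g., real Skype traffic) and flags flows whose distribution is too far away. This is a generic statistical distinguisher for illustration, not one of the specific tests catalogued in the paper.

from collections import Counter

def size_histogram(packet_sizes, bin_width=50):
    # Normalized histogram of packet sizes, bucketed into bins of bin_width bytes.
    counts = Counter(size // bin_width for size in packet_sizes)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def looks_like_imitation(flow_sizes, reference_sizes, threshold=0.5):
    # Flag the flow if its size distribution is far (in total variation) from the reference.
    p, q = size_histogram(flow_sizes), size_histogram(reference_sizes)
    tv = 0.5 * sum(abs(p.get(b, 0.0) - q.get(b, 0.0)) for b in set(p) | set(q))
    return tv > threshold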
Decoy routing is a recently proposed approach for censorship circumvention. It relies on cooperating ISPs in the middle of the Internet to deploy so-called "decoy routers" that proxy network traffic from users in the censorship region. A recent study, published in an award-winning CCS 2012 paper [24], suggested that censors in highly connected countries like China can easily defeat decoy routing by selecting Internet routes that do not pass through the decoys. This attack is known as "routing around decoys" (RAD). In this paper, we perform an in-depth analysis of the true costs of the RAD attack, based on actual Internet data. Our analysis takes into account not just the Internet topology, but also business relationships between ISPs, monetary and performance costs of different routes, etc. We demonstrate that even for the most vulnerable decoy placement assumed in the RAD study, the attack is likely to impose tremendous costs on the censoring ISPs: they will be forced to switch to much more costly routes and suffer degradation in quality of service. We then demonstrate that a more strategic placement of decoys further increases the censors' costs and renders the RAD attack ineffective. We also show that the attack is even less feasible for censors in countries that are not as well connected as China, since they have many fewer routes to choose from. The first lesson of our study is that defeating decoy routing by simply selecting alternative Internet routes is likely to be prohibitively expensive for the censors. The second, even more important lesson is that a fine-grained, data-driven approach is necessary for understanding the true costs of various route selection mechanisms. Analyses based solely on the graph topology of the Internet may lead to mistaken conclusions about the feasibility of decoy routing and other censorship circumvention techniques based on interdomain routing.
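The kind of data-driven cost comparison the paper argues for can be sketched as a path computation over an AS-level graph: for a censor and a destination, compare the cost of the censor's best route with the cost of its best route that avoids every decoy-hosting AS. The toy below (using networkx, with placeholder node labels and a generic "cost" edge weight) ignores business relationships and valley-free routing constraints, which a real analysis must encode.

import networkx as nx

def rad_cost_increase(as_graph, censor_as, dest_as, decoy_ases):
    # Cost of the normal best route versus the best route avoiding all decoy ASes.
    normal_cost = nx.shortest_path_length(as_graph, censor_as, dest_as, weight="cost")
    pruned = as_graph.copy()
    pruned.remove_nodes_from(set(decoy_ases) - {censor_as, dest_as})
    try:
        decoy_free_cost = nx.shortest_path_length(pruned, censor_as, dest_as, weight="cost")
    except nx.NetworkXNoPath:
        decoy_free_cost = float("inf")  # routing around the decoys is impossible
    return normal_cost, decoy_free_cost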