Abstract. We put forward a zero-knowledge-based definition of privacy. Our notion is strictly stronger than the notion of differential privacy and is particularly attractive when modeling privacy in social networks. We furthermore demonstrate that it can be meaningfully achieved for tasks such as computing averages, fractions, histograms, and a variety of graph parameters and properties, such as average degree and distance to connectivity. Our results are obtained by establishing a connection between zero-knowledge privacy and sample complexity, and by leveraging recent sublinear-time algorithms.
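The connection between zero-knowledge privacy and sample complexity can be illustrated with a simple sample-and-aggregate estimator: compute a statistic from a small random sample and perturb it with calibrated noise. The sketch below is illustrative only; the function name, parameters, and the specific Laplace calibration are our own assumptions, not taken from the paper.

```python
import random

def private_average(db, sample_size, epsilon, lo=0.0, hi=1.0):
    """Estimate the average of values in db (each assumed to lie in
    [lo, hi]) from a random sample, then add Laplace noise.
    Illustrative sketch of the sample-and-aggregate idea only."""
    sample = random.sample(db, sample_size)
    avg = sum(sample) / sample_size
    # Sensitivity of the sample average is (hi - lo) / sample_size,
    # so Laplace noise with this scale suffices for the sampled data.
    scale = (hi - lo) / (sample_size * epsilon)
    # A Laplace variate is the difference of two i.i.d. exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return avg + noise
```

Because only a small sample of the database is ever touched, an unbiased simulator that draws its own sample from the underlying population can approximate the mechanism's output, which is the intuition behind achieving zero-knowledge privacy for such aggregate statistics.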
Abstract. We introduce a new definition of privacy called crowd-blending privacy that strictly relaxes the notion of differential privacy. Roughly speaking, k-crowd-blending private sanitization of a database requires that each individual i in the database "blends" with k other individuals j in the database, in the sense that the output of the sanitizer is "indistinguishable" if i's data is replaced by j's. We demonstrate crowd-blending private mechanisms for histograms and for releasing synthetic data points, achieving strictly better utility than what is possible using differentially private mechanisms. Additionally, we demonstrate that if a crowd-blending private mechanism is combined with a "pre-sampling" step, where the individuals in the database are randomly drawn from some underlying population (as is often the case during data collection), then the combined mechanism satisfies not only differential privacy, but also the stronger notion of zero-knowledge privacy. This holds even if the pre-sampling is slightly biased and an adversary knows whether certain individuals were sampled or not. Taken together, our results yield a practical approach for collecting and privately releasing data while ensuring higher utility than previous approaches.
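A crowd-blending private histogram can be sketched as a suppression rule: release a bin's count only when at least k individuals fall into it, so that each released individual blends with a crowd of size k. The code below is a minimal illustration of this suppression idea; the function name and the choice to report suppressed bins as 0 are our own assumptions.

```python
from collections import Counter

def crowd_blending_histogram(records, k):
    """Release exact counts for histogram bins containing at least k
    individuals; suppress smaller bins (reported here as 0).
    A minimal sketch of the suppression idea, not the paper's exact
    mechanism."""
    counts = Counter(records)
    return {bin_: (count if count >= k else 0)
            for bin_, count in counts.items()}
```

Note that, unlike a differentially private histogram, bins with large counts are released exactly (no noise), which is the source of the improved utility; the privacy guarantee for each individual comes from blending with the k-1 others in the same bin.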
We introduce a generalization of differential privacy called tailored differential privacy, where an individual's privacy parameter is "tailored" for the individual based on the individual's data and the data set. In this paper, we focus on a natural instance of tailored differential privacy, which we call outlier privacy: an individual's privacy parameter is determined by how much of an "outlier" the individual is. We provide a new definition of an outlier and use it to introduce our notion of outlier privacy. Roughly speaking, ε(·)-outlier privacy requires that each individual in the data set is guaranteed "ε(k)-differential privacy protection", where k is a number quantifying the "outlierness" of the individual. We demonstrate how to release accurate histograms that satisfy ε(·)-outlier privacy for various natural choices of ε(·). Additionally, we show that ε(·)-outlier privacy with our weakest choice of ε(·)—which offers no explicit privacy protection for "non-outliers"—already implies a "distributional" notion of differential privacy w.r.t. a large and natural class of distributions.
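The tailoring step can be sketched as a two-stage computation: first quantify each individual's outlierness k, then assign the privacy parameter ε(k). In the sketch below, outlierness is measured as the number of individuals sharing the same record; both this measure and the particular ε(·) used in the example are our own illustrative assumptions, not the paper's definitions.

```python
from collections import Counter

def tailored_epsilons(records, eps_fn):
    """Assign each individual a privacy parameter eps_fn(k), where k is
    the number of individuals in the data set sharing that individual's
    record -- one simple, illustrative way to quantify "outlierness"."""
    counts = Counter(records)
    return [eps_fn(record_count := counts[r]) for r in records]

# Illustrative eps(.): rarer records (smaller k) get a smaller epsilon,
# i.e. stronger protection; non-outliers get a weaker guarantee.
eps = tailored_epsilons(["a", "a", "a", "b"], lambda k: 0.5 * k)
```

Here the lone "b" record (k = 1) receives ε = 0.5, a stronger guarantee than the three "a" records (k = 3, ε = 1.5), matching the intuition that outliers need more protection.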
The notion of zero-knowledge [GMR85] is formalized by requiring that for every malicious efficient verifier V*, there exists an efficient simulator S that can reconstruct the view of V* in a true interaction with the prover, in a way that is indistinguishable to every polynomial-time distinguisher. Weak zero-knowledge weakens this notion by switching the order of the quantifiers and only requires that for every distinguisher D, there exists a (potentially different) simulator S_D. In this paper we consider various notions of zero-knowledge, and investigate whether their weak variants are equivalent to their strong variants. Although we show (under complexity assumptions) that for the standard notion of zero-knowledge, its weak and strong counterparts are not equivalent, for meaningful variants of the standard notion, the weak and strong counterparts are indeed equivalent. Towards showing these equivalences, we introduce new non-black-box simulation techniques permitting us, for instance, to demonstrate that the classical 2-round graph non-isomorphism protocol of Goldreich-Micali-Wigderson [GMW91] satisfies a "distributional" variant of zero-knowledge. Our equivalence theorem has other applications beyond the notion of zero-knowledge. For instance, it directly implies the dense model theorem of Reingold et al. (STOC '08), and the leakage lemma of Gentry-Wichs (STOC '11), and provides a modular and arguably simpler proof of these results (while at the same time recasting these results in the language of zero-knowledge).
Recently, there have been a number of papers relating mechanism design and privacy (e.g., see [MT07, Xia11, CCK+11, NST12, NOS12, HK12]). All of these papers consider a worst-case setting where there is no probabilistic information about the players' types. In this paper, we investigate mechanism design and privacy in the Bayesian setting, where the players' types are drawn from some common distribution. We adapt the notion of differential privacy to the Bayesian mechanism design setting, obtaining Bayesian differential privacy. We also define a robust notion of approximate truthfulness for Bayesian mechanisms, which we call persistent approximate truthfulness. We give several classes of mechanisms (e.g., social welfare mechanisms and histogram mechanisms) that achieve both Bayesian differential privacy and persistent approximate truthfulness. These classes of mechanisms can achieve optimal (economic) efficiency, and do not use any payments. We also demonstrate that by considering the above mechanisms in a modified mechanism design model, they can achieve actual truthfulness.