Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set Q of queries over an input database, reasonably accurate estimates of the responses to every query in Q, even when the number of queries is much larger than the number of rows in the database. Given a base synopsis generator that takes a distribution on Q and produces a "weak" synopsis that yields "good" answers for a majority of the weight in Q, our Boosting for Queries algorithm obtains a synopsis that is good for all of Q. We ensure privacy for the rows of the database, but the boosting is performed on the queries. We also provide the first synopsis generators for arbitrary sets of arbitrary low-sensitivity queries, i.e., queries whose answers do not vary much under the addition or deletion of a single row.

In the execution of our algorithm certain tasks, each incurring some privacy loss, are performed many times. To analyze the cumulative privacy loss, we obtain an O(ε²) bound on the expected privacy loss from a single ε-differentially private mechanism. Combining this with evolution of confidence arguments from the literature, we get stronger bounds on the expected cumulative privacy loss due to multiple mechanisms, each of which provides ε-differential privacy or one of its relaxations, and each of which operates on (potentially) different, adaptively chosen, databases.

I. BACKGROUND AND SUMMARY OF RESULTS

Boosting. Boosting is a general and widely used method for improving the accuracy of learning algorithms. (See [23] for an excellent survey.) Given a training set of labeled examples {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, where each x_i is drawn from an underlying distribution D on a universe X and y_i ∈ {+1, −1}, a learning algorithm produces a hypothesis h : X → {+1, −1}. Ideally, h will "describe" not just the given samples but also the underlying distribution. The goal of boosting is to convert a "weak" learner, whose hypotheses do only slightly better than random guessing, into a "strong" learner, whose hypotheses are highly accurate.
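To make the reweight-and-combine pattern behind boosting concrete, the sketch below shows a classical (AdaBoost-style) boosting loop for the learning setting just described. The weak_learner callable, the round count, and the weight-update rule are illustrative assumptions; this is the standard learning-theoretic loop, not the paper's privacy-preserving Boosting for Queries algorithm.

```python
import math

def boost(examples, labels, weak_learner, rounds):
    """AdaBoost-style boosting loop (illustrative sketch).

    examples: list of points x_i from the universe X.
    labels:   list of y_i in {+1, -1}.
    weak_learner(examples, labels, weights) -> hypothesis h: x -> {+1, -1}
        that beats random guessing on the weighted sample.
    Returns a "strong" hypothesis: a weighted majority vote of the
    weak hypotheses produced over all rounds.
    """
    m = len(examples)
    weights = [1.0 / m] * m            # initial distribution over examples
    committee = []                     # (alpha_t, h_t) pairs

    for _ in range(rounds):
        h = weak_learner(examples, labels, weights)
        # Weighted error of h under the current distribution.
        err = sum(w for w, x, y in zip(weights, examples, labels) if h(x) != y)
        err = min(max(err, 1e-12), 1.0 - 1e-12)       # keep alpha finite
        alpha = 0.5 * math.log((1.0 - err) / err)
        committee.append((alpha, h))
        # Reweight: misclassified examples gain weight, then renormalize.
        weights = [w * math.exp(-alpha * y * h(x))
                   for w, x, y in zip(weights, examples, labels)]
        total = sum(weights)
        weights = [w / total for w in weights]

    def strong_hypothesis(x):
        vote = sum(alpha * h(x) for alpha, h in committee)
        return 1 if vote >= 0 else -1

    return strong_hypothesis
```

In the Boosting for Queries setting sketched in the abstract, the analogous reweighting is performed over the queries in Q rather than over training examples, and it must itself be carried out in a way that preserves differential privacy for the rows of the database.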
Differential privacy is a recent notion of privacy tailored to privacy-preserving data analysis [10]. Up to this point, research on differentially private data analysis has focused on the setting of a trusted curator holding a large, static data set; thus every computation is a "one-shot" object: there is no point in computing something twice, since the result will be unchanged, up to any randomness introduced for privacy.

However, many applications of data analysis involve repeated computations, either because the entire goal is one of monitoring, e.g., of traffic conditions, search trends, or incidence of influenza, or because the goal is some kind of adaptive optimization, e.g., placement of data to minimize access costs. In these cases, the algorithm must permit continual observation of the system's state. We therefore initiate a study of differential privacy under continual observation. We identify the problem of maintaining a counter in a privacy-preserving manner and show its wide applicability to many different problems.
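As a concrete illustration of the counter problem, the sketch below implements the well-known tree-based ("binary") counter for releasing running counts of a bit stream under ε-differential privacy. The names (BinaryCounter, step, T, eps) are illustrative, and this should be read as one standard way to realize such a counter, not necessarily the exact construction of the paper.

```python
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) noise as a difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

class BinaryCounter:
    """Tree-based private counter over a stream of at most T bits (sketch).

    Each stream position is folded into at most one partial sum per level of
    a binary tree over the timeline, i.e., at most levels + 1 noisy values,
    so noise of scale (levels + 1)/eps per partial sum keeps the whole output
    stream eps-differentially private by basic composition, while each
    released count aggregates only O(log T) noisy values.
    """
    def __init__(self, T, eps):
        self.levels = max(1, math.ceil(math.log2(T)))
        self.scale = (self.levels + 1) / eps
        self.exact = [0.0] * (self.levels + 1)   # exact partial sum per level
        self.noisy = [0.0] * (self.levels + 1)   # noisy counterpart
        self.t = 0

    def step(self, x):
        """Consume the next bit x in {0, 1}; return a private running count."""
        self.t += 1
        # Level whose block closes at time t: index of t's lowest set bit.
        i = (self.t & -self.t).bit_length() - 1
        # Fold the finished lower-level blocks (plus x) into level i.
        self.exact[i] = x + sum(self.exact[:i])
        for j in range(i):
            self.exact[j] = 0.0
            self.noisy[j] = 0.0
        self.noisy[i] = self.exact[i] + laplace(self.scale)
        # The released count sums one noisy block per set bit of t.
        return sum(self.noisy[j]
                   for j in range(self.levels + 1) if (self.t >> j) & 1)
```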
In this work, we introduce one-time programs, a new computational paradigm geared towards security applications. A one-time program can be executed on a single input, whose value can be specified at run time. Other than the result of the computation on this input, nothing else about the program is leaked. Hence, a one-time program is like a black-box function that may be evaluated once and then "self-destructs." This also extends to k-time programs, which are like black-box functions that can be evaluated k times and then self-destruct.

One-time programs serve many of the same purposes as program obfuscation, the obvious one being software protection, but also including applications such as temporary transfer of cryptographic ability. Moreover, the applications of one-time programs go well beyond those of obfuscation, since one-time programs can only be executed once (or, more generally, a limited number of times), while obfuscated programs have no such bounds. For example, one-time programs lead naturally to electronic cash or token schemes: coins are generated by a program that can only be run once, and thus cannot be double-spent. Most significantly, the new paradigm of one-time computing opens new avenues for conceptual research. In this work we explore one such avenue, presenting the new concept of "one-time proofs," proofs that can only be verified once and then become useless and unconvincing.

All these tasks are clearly impossible using software alone, as any piece of software can be copied and run again, enabling the user to execute the program on more than one input. All our solutions employ a secure memory device, inspired by the cryptographic notion of interactive oblivious transfer protocols, that stores two secret keys (k_0, k_1). The device takes as input a single bit b ∈ {0, 1}, outputs k_b, and then self-destructs. Using such devices, we demonstrate that for every input length, any standard program (Turing machine) can be efficiently compiled into a functionally equivalent one-time program. We also show how this memory device can
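The interface of the secure memory device is simple enough to capture directly in code. The Python class below is a purely software model of that interface; a real device would rely on tamper-resistant hardware, and the class and method names (OneTimeMemory, read) as well as the os.urandom key generation are illustrative assumptions.

```python
import os

class OneTimeMemory:
    """Software model of the one-time memory device described above.

    The device stores two secret keys (k_0, k_1), reveals exactly one of
    them for a chosen bit b, and then refuses all further queries
    ("self-destructs"). This class only models the interface; the security
    of the paradigm comes from the device itself, not from software.
    """
    def __init__(self, k0: bytes, k1: bytes):
        self._keys = [k0, k1]
        self._destroyed = False

    def read(self, b: int) -> bytes:
        if self._destroyed:
            raise RuntimeError("device has self-destructed")
        if b not in (0, 1):
            raise ValueError("input must be a single bit")
        key = self._keys[b]
        self._keys = None            # erase both keys
        self._destroyed = True
        return key

# Usage: e.g., one fresh device could hold the two keys tied to one input bit.
device = OneTimeMemory(os.urandom(16), os.urandom(16))
k = device.read(1)                   # learn k_1; k_0 is now gone for good
# A second call such as device.read(0) raises RuntimeError.
```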