Abstract—Distributed mobile crowd sensing is becoming a valuable paradigm, enabling a variety of novel applications built on mobile networks and smart devices. However, this trend brings several challenges, including the need for crowdsourcing platforms to manage interactions between applications and the crowd (participants or workers). One of the key functions of such platforms is spatial task assignment, which assigns sensing tasks to participants based on their locations. Task assignment becomes critical when participants are hesitant to share their locations due to privacy concerns. In this paper, we examine the problem of spatial task assignment in crowd sensing when participants use spatial cloaking to obfuscate their locations. We investigate methods for assigning sensing tasks to participants that efficiently manage location uncertainty and resource constraints. We propose a novel two-stage optimization approach consisting of a global optimization using cloaked locations, followed by a local optimization using participants' precise locations without breaching privacy. Experimental results on both synthetic and real data show that our methods achieve high sensing coverage at low cost using cloaked locations.
This paper considers the problem of secure data aggregation (mainly summation) in a distributed setting, while ensuring differential privacy of the result. We study secure multiparty addition protocols using well-known security schemes: Shamir's secret sharing, perturbation-based schemes, and various encryption schemes. We supplement our study with our new enhanced encryption scheme EFT, which is efficient and fault tolerant. Differential privacy of the final result is achieved by either the distributed Laplace or Geometric mechanism (DLPA and DGPA, respectively), while approximate differential privacy is achieved by diluted mechanisms. Distributed random noise is generated collectively by all participants, who draw random variables from one of several distributions: Gamma, Gaussian, Geometric, or their diluted versions. We introduce a new distributed privacy mechanism with noise drawn from the Laplace distribution, which achieves smaller redundant noise with efficiency. We compare the complexity and security characteristics of the protocols under different differential privacy mechanisms and security schemes. More importantly, we implemented all protocols and present an experimental comparison of their performance and scalability in a real distributed environment. Based on the evaluations, we identify our security scheme and Laplace DLPA as the most efficient for secure distributed data aggregation with privacy.
This paper considers the problem of secure data aggregation in a distributed setting while preserving differential privacy for the aggregated data. In particular, we focus on secure sum aggregation. Security is guaranteed by secure multiparty computation protocols using well-known security schemes: Shamir's secret sharing, perturbation-based schemes, and various encryption schemes. Differential privacy of the final result is achieved by the distributed Laplace perturbation mechanism (DLPA). Partial random noise is generated by all participants, who draw random variables from Gamma or Gaussian distributions such that the aggregated noise follows a Laplace distribution, satisfying differential privacy. We also introduce a new efficient distributed noise generation scheme with partial noise drawn from Laplace distributions. We compare the protocols with different privacy mechanisms and security schemes in terms of their complexity and security characteristics. More importantly, we implemented all protocols and present an experimental comparison of their performance and scalability in a real distributed environment.
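The Gamma-based partial noise described above follows the standard decomposition of the Laplace distribution: a Laplace(0, b) variable equals the sum, over n parties, of differences of two independent Gamma(1/n, b) variables, so each participant can add only its share of the noise before secure summation. The sketch below (an illustration of that decomposition, not the paper's actual protocol; function names are hypothetical) shows partial noise generation in Python:

```python
import random

def partial_laplace_noise(n_parties, scale):
    # Each party draws G1 - G2 with G1, G2 ~ Gamma(shape=1/n, scale=b).
    # Summed over all n parties, the total noise is Laplace(0, b).
    return (random.gammavariate(1.0 / n_parties, scale)
            - random.gammavariate(1.0 / n_parties, scale))

def noisy_sum(values, scale):
    # Each participant perturbs its own value with partial noise;
    # only the perturbed values enter the secure summation, so the
    # aggregator sees a differentially private total.
    n = len(values)
    return sum(v + partial_laplace_noise(n, scale) for v in values)
```

Because no single party's partial noise is Laplace on its own, an honest-but-curious aggregator learns only the Laplace-perturbed sum, which is what DLPA requires.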
Abstract—In this paper, we consider the collaborative data publishing problem of anonymizing horizontally partitioned data held by multiple data providers. We consider a new type of "insider attack" by colluding data providers, who may use their own data records (a subset of the overall data) in addition to external background knowledge to infer the data records contributed by other providers. The paper addresses this new threat and makes several contributions. First, we introduce the notion of m-privacy, which guarantees that the anonymized data satisfies a given privacy constraint against any group of up to m colluding data providers. Second, we present heuristic algorithms exploiting the equivalence group monotonicity of privacy constraints and adaptive ordering techniques for efficiently checking m-privacy given a set of records. Finally, we present a data provider-aware anonymization algorithm with adaptive m-privacy checking strategies to ensure high utility and m-privacy of anonymized data with efficiency. Experiments on real-life datasets suggest that our approach achieves utility and efficiency better than or comparable to existing and baseline algorithms while providing the m-privacy guarantee.
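The m-privacy check above can be stated as: a privacy constraint must still hold for the records of any group after every possible coalition of up to m providers discounts its own contributions. A minimal brute-force sketch of that definition (an illustration only; the paper's heuristic algorithms avoid this exhaustive enumeration, and `constraint` is a hypothetical placeholder for any monotone privacy predicate such as k-anonymity):

```python
from itertools import combinations

def is_m_private(records, providers, m, constraint):
    # records: list of (provider_id, record) pairs.
    # constraint: predicate on a list of records, True if the privacy
    # constraint (e.g. k-anonymity) holds for those records.
    # m-privacy requires the constraint to hold even when any coalition
    # of up to m providers removes (i.e. already knows) its own records.
    for size in range(m + 1):
        for coalition in combinations(providers, size):
            remaining = [r for (p, r) in records if p not in coalition]
            if not constraint(remaining):
                return False
    return True
```

The number of coalitions grows combinatorially in m, which is why the abstract's adaptive ordering and monotonicity-based pruning matter in practice.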