2013
DOI: 10.1613/jair.3983

Protecting Privacy through Distributed Computation in Multi-agent Decision Making

Abstract: As large-scale theft of data from corporate servers is becoming increasingly common, it becomes interesting to examine alternatives to the paradigm of centralizing sensitive data into large databases. Instead, one could use cryptography and distributed computation so that sensitive data can be supplied and processed in encrypted form, and only the final result is made known. In this paper, we examine how such a paradigm can be used to implement constraint satisfaction, a technique that can solve a broad class …
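The idea in the abstract — agents supply private inputs in protected form and only the final result is revealed — can be illustrated with a minimal additive secret-sharing sketch. This is an assumption for illustration only: the paper's actual protocols for distributed constraint satisfaction are cryptographic and considerably more involved, and the agent names and values below are hypothetical.

```python
# Toy sketch: three agents jointly compute the sum of their private costs
# without any single agent (or observer) learning another agent's input.
# Uses additive secret sharing; `random` is NOT cryptographically secure,
# so a real deployment would use the `secrets` module or a crypto library.
import random

P = 2**61 - 1  # modulus; any prime larger than the possible sum works

def share(secret, n):
    """Split `secret` into n additive shares that sum to it modulo P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares; any strict subset reveals nothing about the secret."""
    return sum(shares) % P

# Each agent holds one private cost (hypothetical values).
private_costs = [12, 7, 30]
n = len(private_costs)

# Each agent splits its cost and sends the j-th share to agent j.
all_shares = [share(c, n) for c in private_costs]

# Agent j locally adds the shares it received (column j of the matrix).
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# Publishing only the partial sums reveals the total, not the inputs.
total = reconstruct(partial_sums)  # 12 + 7 + 30 = 49
```

Each individual share is uniformly random, so intermediate messages leak nothing; only the aggregate becomes known, mirroring the "only the final result is made known" property described above.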

Cited by 33 publications (10 citation statements)
References 46 publications
“…Privacy and Security: Many privacy models have been adopted in multi-agent planning according to three different criteria: the information model (imposed privacy [19], induced privacy [20]), the information-sharing scheme (MA-STRIPS [19], subset privacy [21]), and practical privacy guarantees (no privacy [22], weak privacy [23], object cardinality privacy [24], and strong privacy [9]). Privacy can be divided into different categories, such as agent privacy, model privacy, decision privacy, topology privacy, and constraint privacy [4,14]. Here we introduce some widely used types of privacy.…”
Section: Privacy and Security Assumption
confidence: 99%
“…Privacy-preserving plans represent plans that do not actively disclose sensitive private information. In fact, privacy preservation is the goal pursued by multi-agent planning, which has been a crucial concern for multi-agent systems in some contexts, such as agent negotiation [12], multi-agent reinforcement learning and policy iteration [4,5], deep learning [13], and distributed constraint optimization problems (DCOPs) [14][15][16]. Multi-agent planning (MAP) in cooperative environments aims at generating a sequence of actions to fulfill some specified goals [17].…”
Section: Introduction
confidence: 99%
“…Many recent pieces of work explore deception [23] and privacy [24]. For example, deception was investigated in two classes, simulation and dissimulation [23]; privacy was investigated in five classes: agent privacy, model privacy, decision privacy, topology privacy, and constraint privacy [25,26]. In a cooperative and adversarial environment, a goal-driven intelligent agent would manipulate its goals during the goal-pursuing process, for example sharing or revealing the goals among teammates while hiding them from adversaries.…”
Section: Introduction
confidence: 99%
“…[1][2][3][4][5][6][7] We consider multiagent, cooperative, simultaneous decision making in partially observable and stochastic environments. Here, simultaneous signifies that each agent decides on actions over multiple decision variables at once.…”
Section: Introduction
confidence: 99%