2006
DOI: 10.1007/s10458-006-5951-y

Privacy Loss in Distributed Constraint Reasoning: A Quantitative Framework for Analysis and its Applications

Abstract: It is critical that agents deployed in real-world settings, such as businesses, offices, universities and research laboratories, protect their individual users' privacy when interacting with other entities. Indeed, privacy is recognized as a key motivating factor in the design of several multiagent algorithms, such as in distributed constraint reasoning (including both algorithms for distributed constraint optimization (DCOP) and distributed constraint satisfaction (DisCSPs)), and researchers have begun to pro…

Cited by 41 publications (34 citation statements)
References 22 publications
“…If this value does not end up being assigned in the solution, the agent revealed some private information that could not have been deduced from only viewing the solution. Thus, follow-up work focused on the question of how to measure this privacy loss (Franzin, Rossi, Freuder, & Wallace, 2004; Maheswaran, Pearce, Bowring, Varakantham, & Tambe, 2006), analyzing how much information specific algorithms lose (Greenstadt, Pearce, & Tambe, 2006), and on the question of how to alter existing DisCSP algorithms to handle stricter privacy demands (Greenstadt, Grosz, & Smith, 2007; Léauté & Faltings, 2009). Very recently, work that provides formal guarantees for some DisCSP algorithms has emerged (Léauté & Faltings, 2009; Grinshpoun & Tassa, 2014).…”
Section: Privacy (mentioning; confidence: 99%)
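
The privacy-loss idea quoted above can be made concrete with a toy model. The following is a minimal sketch, not the published VPS framework: the finite state space, the exact-inference observer, and the uniform "fraction of possible states" valuation are all simplifying assumptions (VPS admits other valuation functions).

```python
# Toy illustration of a possible-states privacy-loss metric. Sketch only:
# assumes states are finite and enumerable, the observer does exact
# inference (a revealed fact simply eliminates inconsistent states), and
# the valuation is the uniform "fraction of states still possible".

def privacy_loss(initial_states, observations):
    """Return the fraction of the agent's initial possible-state set that
    an observer can rule out after applying each observation.

    initial_states: set of hashable private states the observer starts with.
    observations:   iterable of predicates; a state survives only if it is
                    consistent with every observed message.
    """
    remaining = set(initial_states)
    for consistent in observations:
        remaining = {s for s in remaining if consistent(s)}
    return 1.0 - len(remaining) / len(initial_states)


if __name__ == "__main__":
    # An agent's private availability for 4 time slots: any subset possible.
    from itertools import product
    states = set(product((0, 1), repeat=4))          # 16 possible schedules

    # During negotiation the agent proposed slot 2 (so it must be free) and
    # rejected slot 0 (so it must be busy) -- information leaked by messages
    # even if slot 2 is not part of the final solution.
    leaked = [lambda s: s[2] == 1, lambda s: s[0] == 0]

    print(f"privacy loss: {privacy_loss(states, leaked):.2f}")  # 0.75
```

Under this uniform valuation, any message that halves the observer's candidate set costs the agent the same amount of privacy; entropy-based or utility-weighted valuations would weigh eliminations differently.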
“…These works try to reduce the privacy loss of existing DisCSP/DCOP algorithms. Metrics based on the Valuation of Possible States (Maheswaran et al., 2006) framework are usually considered to quantify the reduction in privacy loss. Greenstadt et al. (2007) present the DPOP with Secret Sharing (SSDPOP) algorithm, which is an extension of DPOP based on the efficient cryptographic technique of secret sharing.…”
Section: Anonymity in Multi-Agent Problem Solving (mentioning; confidence: 99%)
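
The statement above mentions secret sharing as SSDPOP's building block. The snippet below is a generic additive secret-sharing sketch, not SSDPOP itself; the modulus and the three-agent example are illustrative assumptions. It shows the core property such algorithms exploit: n parties can compute the sum of their private costs while no single party learns any individual cost.

```python
# Generic additive secret sharing over Z_p. Sketch only: no authenticated
# channels and no protection against malicious parties.
import secrets

P = 2**61 - 1  # a prime large enough to avoid wrap-around for small costs

def share(value, n):
    """Split `value` into n shares that sum to `value` mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three agents, each holding a private constraint cost.
costs = [7, 12, 3]
n = len(costs)

# Agent i sends its j-th share to agent j; each agent sums what it receives.
all_shares = [share(c, n) for c in costs]
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# Combining the partial sums reveals only the aggregate cost, while any
# single share (or partial sum) is uniformly random and leaks nothing alone.
assert sum(partial_sums) % P == sum(costs)
print(sum(partial_sums) % P)  # 22
```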
“…Metrics have been proposed to evaluate constraint privacy loss in algorithms, in particular for distributed meeting scheduling (Franzin, Freuder, Rossi, & Wallace, 2004; Wallace & Freuder, 2005). Maheswaran, Pearce, Bowring, Varakantham, and Tambe (2006) designed a framework called Valuation of Possible States (VPS) that they used to measure constraint privacy loss in the OptAPO and SynchBB algorithms, and they considered the impact of whether the problem topology is public or only partially known to the agents. Greenstadt et al. (2006) also applied VPS to evaluate DPOP and ADOPT on meeting scheduling problems, under the assumption that the problem topology is public.…”
Section: Initial Knowledge Assumptions (mentioning; confidence: 99%)