Proceedings of the 7th International Conference on Collaborative Computing: Networking, Applications and Worksharing 2011
DOI: 10.4108/icst.collaboratecom.2011.247094

m-Privacy for Collaborative Data Publishing

Abstract: In this paper, we consider the collaborative data publishing problem for anonymizing horizontally partitioned data at multiple data providers. We consider a new type of "insider attack" by colluding data providers who may use their own data records (a subset of the overall data) in addition to the external background knowledge to infer the data records contributed by other data providers. The paper addresses this new threat and makes several contributions. First, we introduce the notion of m-privacy, …
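
The truncated abstract only names the m-privacy notion. As a rough illustration, the sketch below checks a toy anonymized group against every coalition of up to m colluding providers, assuming l-diversity as the underlying privacy constraint; the names (Record, check_m_privacy) and the toy data are illustrative and not taken from the paper.

```python
# Minimal sketch of an m-privacy check (brute force), assuming l-diversity
# as the underlying privacy constraint. Not the paper's algorithm.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Record:
    provider: str      # data provider that contributed this record
    sensitive: str     # sensitive attribute value

def satisfies_l_diversity(group, l):
    """A group is l-diverse if it contains at least l distinct sensitive values."""
    return len({r.sensitive for r in group}) >= l

def check_m_privacy(groups, providers, m, l):
    """True if every equivalence group stays l-diverse even after any coalition
    of up to m providers removes its own records (the 'insider attack')."""
    for size in range(m + 1):                              # coalitions of size 0..m
        for coalition in combinations(providers, size):
            for group in groups:
                remaining = [r for r in group if r.provider not in coalition]
                if not satisfies_l_diversity(remaining, l):
                    return False
    return True

# Toy example: one anonymized group contributed by two providers.
group = [Record("P1", "flu"), Record("P1", "cold"), Record("P2", "flu")]
print(check_m_privacy([group], ["P1", "P2"], m=1, l=2))    # False: P1 alone can infer P2's value
```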

Cited by 20 publications (25 citation statements); References 39 publications.

“…Specifically, most of the existing techniques assess performance using traditional metrics only: information loss is measured in terms of metrics such as the global certainty penalty [4,24], the non-uniform entropy metric [25], normalized information loss [26,27], normalized certainty penalty [4], query error [11,28], and the sum of squared errors [29]. In the proposed approach, by contrast, Nayahi and Kavitha achieve anonymization through centroid-based replacement of QID values, which is superior to suppression in terms of information loss and computationally less expensive than generalization.…”
Section: For Utility Preserving Data Clustering (mentioning)
confidence: 99%
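
Among the information-loss metrics listed above, the normalized certainty penalty (NCP) is simple to illustrate. The sketch below is an illustrative rendering for numeric quasi-identifiers generalized to intervals, using an equal-weight average over attributes; it is not code from any of the cited works.

```python
# Illustrative normalized certainty penalty (NCP) for numeric quasi-identifiers
# generalized to intervals; 0 means no information loss, 1 means full suppression.
def ncp_record(intervals, domain_ranges):
    """Equal-weight average of (generalized interval width / attribute domain width)."""
    penalties = [(hi - lo) / rng for (lo, hi), rng in zip(intervals, domain_ranges)]
    return sum(penalties) / len(penalties)

def ncp_table(records, domain_ranges):
    """Average NCP over all records of an anonymized table."""
    return sum(ncp_record(r, domain_ranges) for r in records) / len(records)

# Toy example: two QIDs, age (domain width 80) and income (domain width 100).
generalized = [
    [(30, 40), (10, 10)],   # age generalized to [30, 40], income left exact
    [(30, 40), (10, 60)],
]
print(ncp_table(generalized, domain_ranges=[80, 100]))   # 0.1875
```
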
“…l-diversity helps to overcome this problem. In the current research paper [1], the authors introduce an m-privacy algorithm that verifies anonymization and l-diversity. For this they consider generalization and bucketization techniques for maintaining an anonymized view of the data, and also provide l-diversity, which helps to increase the privacy of the data.…”
Section: Fig. 1: Aggregate and Anonymize (mentioning)
confidence: 99%
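
Of the two anonymization techniques mentioned in this statement, bucketization is the simpler to sketch: records are grouped into buckets, and the sensitive attribute is published in a separate table keyed only by a bucket id, breaking the record-level link. The function name, bucket size, and toy data below are assumptions for illustration, not the algorithm from [1].

```python
# Illustrative bucketization: publish quasi-identifiers and sensitive values in
# separate tables that share only a bucket id. Not the algorithm from [1].
import random

def bucketize(records, bucket_size):
    """records: list of (qid_tuple, sensitive_value) pairs.
    Returns (qid_table, sensitive_table), each a list of (bucket_id, value)."""
    random.shuffle(records)                    # so ordering does not leak the link
    qid_table, sens_table = [], []
    for start in range(0, len(records), bucket_size):
        bucket_id = start // bucket_size
        bucket = records[start:start + bucket_size]
        qid_table += [(bucket_id, qid) for qid, _ in bucket]
        sens = [s for _, s in bucket]
        random.shuffle(sens)                   # shuffle sensitive values within the bucket
        sens_table += [(bucket_id, s) for s in sens]
    return qid_table, sens_table

qids, sens = bucketize([((34, "13053"), "flu"), ((36, "13068"), "cancer"),
                        ((41, "14850"), "flu"), ((47, "14853"), "hepatitis")],
                       bucket_size=2)
print(qids)   # e.g. [(0, (36, '13068')), (0, (47, '14853')), (1, ...), (1, ...)]
print(sens)   # e.g. [(0, 'cancer'), (0, 'hepatitis'), (1, ...), (1, ...)]
```
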
“…We propose to demonstrate DObjects+, a scalable and extensible framework aimed at enabling privacy-preserving data federation services. The framework extends our DObjects architecture [6], [8] with our ongoing work on distributed anonymization protocols [7], [5], [18] and secure query processing protocols [9] for seamless access to distributed and possibly private data. We summarize the contributions of the demonstrated framework below.…”
Section: Contributions (mentioning)
confidence: 99%
“…While the framework is orthogonal to different privacy principles, we studied several representative state-of-the-art privacy principles within our framework, including l-diversity [14], t-closeness [12], and differential privacy [3], [10]. We show the implications of adopting them in the distributed setting with respect to the above attack space and integrated new or modified notions and algorithms in our framework [7], [5], [18].…”
Section: Contributions (mentioning)
confidence: 99%
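
Of the privacy principles named here, differential privacy is the easiest to show in a few lines. Below is a minimal sketch of the Laplace mechanism for a count query, not the cited framework's implementation; the epsilon value and the count are illustrative.

```python
# Minimal sketch of the Laplace mechanism for an epsilon-differentially
# private count query. Illustrative only; not code from [3], [10].
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two i.i.d. exponentials."""
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def private_count(true_count, epsilon, sensitivity=1.0):
    """A count changes by at most 1 (its sensitivity) when one record is added
    or removed, so Laplace noise with scale sensitivity/epsilon suffices."""
    return true_count + laplace_noise(sensitivity / epsilon)

print(private_count(120, epsilon=0.5))   # e.g. 117.6 (output is randomized)
```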