Lecture Notes in Computer Science
DOI: 10.1007/978-3-540-71703-4_37
Protecting Individual Information Against Inference Attacks in Data Publishing

Abstract: In many data-publishing applications, the data owner needs to protect sensitive information pertaining to individuals. Meanwhile, certain information is required to be published. The sensitive information can be considered leaked if an adversary can infer the real value of a sensitive entry with high confidence. In this paper we study how to protect sensitive data when an adversary can perform inference attacks using association rules derived from the data. We formulate the inference attack model,…
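The attack model the abstract names, inference through association rules mined from the published data itself, can be made concrete with a short sketch. The snippet below is not the paper's algorithm; the table, the attribute names, and the 0.7 confidence threshold are all invented for illustration. It mines single-antecedent rules of the form (public attribute = value) → (sensitive attribute = value) and flags any rule whose confidence reaches the adversary's threshold, i.e., any sensitive entry that would count as leaked under this model:

```python
from collections import Counter, defaultdict

# Hypothetical published table (all attribute names and values are
# invented): each row pairs public attributes with one sensitive entry.
rows = [
    {"zip": "47906", "age": "30-39", "disease": "flu"},
    {"zip": "47906", "age": "30-39", "disease": "flu"},
    {"zip": "47906", "age": "30-39", "disease": "flu"},
    {"zip": "47906", "age": "40-49", "disease": "cold"},
    {"zip": "47907", "age": "30-39", "disease": "cold"},
]

SENSITIVE = "disease"
THRESHOLD = 0.7  # assumed adversary confidence bound, chosen arbitrarily

def risky_rules(rows, sensitive, threshold):
    """Yield rules (attr=val) -> (sensitive=val) whose confidence,
    support(antecedent and consequent) / support(antecedent),
    meets or exceeds the threshold."""
    support = Counter()              # occurrences of each (attr, val)
    joint = defaultdict(Counter)     # (attr, val) -> sensitive-value counts
    for row in rows:
        for attr, val in row.items():
            if attr != sensitive:
                support[(attr, val)] += 1
                joint[(attr, val)][row[sensitive]] += 1
    for key, total in support.items():
        for sens_val, count in joint[key].items():
            if count / total >= threshold:
                yield key, sens_val, count / total

for (attr, val), sens_val, conf in risky_rules(rows, SENSITIVE, THRESHOLD):
    print(f"{attr}={val} -> {SENSITIVE}={sens_val} (confidence {conf:.2f})")
```

A publisher could run the same mining pass before release and generalize or suppress the entries covered by high-confidence rules, which is the flavor of protection the abstract describes.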


Cited by 24 publications (13 citation statements)
References 16 publications
“…In this study, we consider the inference attack [3] as the targeted threat model. As described above, we consider that each user has two types of data: i) public data (e.g., her activity data) that she is willing to release for getting personalized recommendations, and ii) private data (e.g., gender) that she wants to keep private.…”
Section: Threat Model
confidence: 99%
“…For example, one's political affiliation can be inferred from her rating of TV shows [1]; one's gender can be inferred from her activities on location-based social networks [2]. These studies show that private data often suffers from inference attacks [3], where an adversary analyzes a user's public data to illegitimately gain knowledge about her private data. It is thus crucial to protect user private data when releasing public data to recommendation engines.…”
Section: Introduction
confidence: 99%
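The inference step described in this passage, predicting a private attribute from released activity data, can be sketched as a naive-Bayes attack. Everything below is assumed for illustration: the auxiliary labeled dataset the adversary holds, the venue categories, and the "gender" attribute are invented, and this is one plausible instantiation of the attack rather than a method from the cited works.

```python
import math
from collections import Counter, defaultdict

# Hypothetical auxiliary dataset the adversary already has:
# (public venue categories visited, private attribute). Invented values.
auxiliary = [
    (["gym", "stadium"], "male"),
    (["stadium", "barbershop"], "male"),
    (["salon", "yoga_studio"], "female"),
    (["yoga_studio", "cafe"], "female"),
    (["gym", "cafe"], "female"),
]

def fit(aux):
    """Estimate P(private) and per-label item counts from auxiliary data."""
    prior = Counter()
    item_counts = defaultdict(Counter)
    vocab = set()
    for items, label in aux:
        prior[label] += 1
        for item in items:
            item_counts[label][item] += 1
            vocab.add(item)
    return prior, item_counts, vocab

def infer(public_items, prior, item_counts, vocab):
    """Return the private value maximizing the naive-Bayes log posterior,
    with add-one smoothing so unseen items keep the score defined."""
    total = sum(prior.values())
    scores = {}
    for label, n in prior.items():
        denom = sum(item_counts[label].values()) + len(vocab)
        score = math.log(n / total)
        for item in public_items:
            score += math.log((item_counts[label][item] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

prior, item_counts, vocab = fit(auxiliary)
print(infer(["gym", "stadium"], prior, item_counts, vocab))  # "male" here
```

The point of the sketch is that the adversary never needs the private data itself, only a correlated auxiliary corpus, which is exactly why releasing public data to recommendation engines creates the risk the passage describes.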
“…A simple example is the selling of user information to marketing companies. In short, there is no guarantee that third-party applications are benign, and that they use user data in accordance with the purpose of the applications [4]. While the above problem has to do with the misuse of legitimately accessible information, this work is instead about an even more challenging problem: i.e., through the extension API, malicious applications may obtain some private information for which they are not authorized.…”
Section: Proposed System
confidence: 99%
“…More precisely, Kifer has introduced the deFinetti attack [5] that aims at building a classifier predicting the sensitive attribute corresponding to a set of non-sensitive attributes. Finally, we refer the reader to [7] for a study evaluating the usefulness of some privacy-preserving techniques for preventing inference attacks.…”
Section: Background and Related Work
confidence: 99%
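The two ideas in this passage, a classifier that predicts the sensitive attribute from non-sensitive attributes and an evaluation of how well privacy-preserving techniques resist it, suggest a small before/after evaluation loop. In the sketch below the table, the attribute names, and the zip-truncation rule are all invented; it measures the adversary's strongest per-group inference confidence before and after generalizing a quasi-identifier:

```python
from collections import Counter, defaultdict

# Hypothetical released table; names and values invented for the sketch.
table = [
    {"zip": "47906", "condition": "flu"},
    {"zip": "47906", "condition": "flu"},
    {"zip": "47907", "condition": "cold"},
    {"zip": "47907", "condition": "flu"},
]

def max_confidence(rows, public, sensitive):
    """Adversary's best per-group confidence: max P(sensitive | public)."""
    groups = defaultdict(Counter)
    for row in rows:
        groups[row[public]][row[sensitive]] += 1
    return {g: max(c.values()) / sum(c.values()) for g, c in groups.items()}

def generalize_zip(rows, digits=3):
    """A simple privacy-preserving transformation: truncate zip codes."""
    return [{**r, "zip": r["zip"][:digits] + "*" * (5 - digits)} for r in rows]

print("before:", max_confidence(table, "zip", "condition"))
# {'47906': 1.0, '47907': 0.5} -- 47906 residents are fully exposed
print("after: ", max_confidence(generalize_zip(table), "zip", "condition"))
# {'479**': 0.75} -- the strongest inference is weakened
```

Dropping the strongest group from confidence 1.0 to 0.75 is the kind of before/after comparison such an evaluation would report, with the usual caveat that coarser groups also cost data utility.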