2019
DOI: 10.1109/tkde.2018.2840974
Privacy-Preserving Social Media Data Publishing for Personalized Ranking-Based Recommendation

Abstract: Personalized recommendation is crucial to help users find pertinent information. It often relies on a large collection of user data, in particular users' online activity (e.g., tagging/rating/checking-in) on social media, to mine user preference. However, releasing such user activity data makes users vulnerable to inference attacks, as private data (e.g., gender) can often be inferred from the users' activity data. In this paper, we propose PrivRank, a customizable and continuous privacy-preserving social med…


Cited by 72 publications (33 citation statements)
References 36 publications
“…9) represent whether a user's social status (defined in [45] as the number of followers divided by the number of followings) is greater than 5 or not. In [9], the social status is regarded as a private attribute that can be leaked by the user's behaviour and should be protected. We set 5 to be a threshold, because mobility patterns of people are substantially different when their social status is greater than 5 [45].…”
Section: A1 Experimental Comparison Among Attributes In Manhattan
confidence: 99%
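The social-status attribute quoted above is simple arithmetic: the ratio of followers to followings from [45], binarized against a threshold of 5. A minimal sketch (function names are my own illustration, not from the cited papers):

```python
def social_status(followers: int, followings: int) -> float:
    """Social status as defined in [45]: followers divided by followings."""
    if followings <= 0:
        raise ValueError("followings must be positive")
    return followers / followings


def is_high_status(followers: int, followings: int, threshold: float = 5.0) -> bool:
    """Binarize the attribute: True when social status exceeds the threshold
    of 5 used in the quoted citation statement."""
    return social_status(followers, followings) > threshold
```

For example, a user with 600 followers and 100 followings has status 6.0 and falls in the high-status group, while a user with equal counts does not.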
“…The inference-prevention schemes focus on prevention by protectively processing and perturbing the location information prior to disclosure. The latter can be further broken down into location data perturbation techniques represented by [17,30,45,27,47,48] and trajectory inference prevention techniques represented by [7,8,34,35,36].…”
Section: Related Work
confidence: 99%
“…Location data perturbation schemes consist of perturbation through dummies [25,21], information-theoretic approaches [47,48], spatial location cloaking [5,11,17,19,24,30,45], and differential privacy [4,6,9,10,22,23]. The goal of location data perturbation is to perturb users' real location information so that the injected uncertainty can resist potential attacks made by adversaries.…”
Section: Related Work
confidence: 99%
“…(ii) when we know exact or approximate information on the input distributions (e.g., when we can use public datasets [11], [12] to learn approximate distributions of locations of male/female users if we want to obfuscate the attribute male/female). For the scenario (i), we clarify how much perturbation noise should be added to provide f-divergence DistP when we use an existing mechanism for obfuscating point data.…”
Section: Introduction
confidence: 99%