2022
DOI: 10.1007/s10458-021-09543-5

An explainable assistant for multiuser privacy

Abstract: Multiuser Privacy (MP) concerns the protection of personal information in situations where such information is co-owned by multiple users. MP is particularly problematic in collaborative platforms such as online social networks (OSN). In fact, too often OSN users experience privacy violations due to conflicts generated by other users sharing content that involves them without their permission. Previous studies show that in most cases MP conflicts could be avoided, and are mainly due to the difficulty for the u…

Cited by 18 publications (15 citation statements)
References: 83 publications
“…One way to do this would be to enable our model to reason about the Theory-of-Mind of both individuals and groups, in different contexts [44], such that explanations can be tailored to the users' knowledge about privacy norms and communicated through dialogue. While our model is interpretable and allows scrutiny, the social process by which explanations would be made requires the design and validation of the explanations themselves and the best method to visualize and/or convey them [35,40].…”
Section: Discussion (confidence: 99%)
“…For instance, [22,50,51,53] propose mechanisms to resolve the multi-party privacy management conflicts that arise in social media. More recently, [38-40] define and evaluate a value-aligned and explainable agent for managing multi-user privacy conflicts.…”
Section: Related Work (confidence: 99%)
“…Our approach shows that explanations ought to take place at the moment that users are confronted with a decision: many participants note that their frustration stems not only, or even primarily, from the decision itself, but from the lack of support or information they receive when wanting to enquire about it. This links to the issues of positionality faced by Explainable AI: beyond transparency of technical specifics, there is a need for a "relational transparency" [115] geared towards the kinds of explanations that users from different backgrounds require [74,87,88]. Such transparency needs to be built on robust usability studies, keeping in mind that, instead of devising two (or more) versions of the same software to serve different classes (genders, ethnicities, etc.…”
Section: Exclusion (confidence: 99%)
“…Privacy assistants that work side by side with humans in a decentralized manner could serve to address this problem. Privacy assistants have been developed for a variety of assistance tasks, including checking for privacy violations [11], resolving privacy conflicts among humans [20,25], recommending sharing policies [8,23], and signaling whether a piece of content is private [12,24]. While doing these tasks, it is important for the privacy assistant to be able to explain its decisions to the user.…”
Section: Introduction (confidence: 99%)