2020
DOI: 10.1109/mprv.2019.2918540

Interpretable Machine Learning for Privacy-Preserving Pervasive Systems

Abstract: Our everyday interactions with pervasive systems generate traces that capture various aspects of human behavior and enable machine learning algorithms to extract latent information about users. In this paper, we propose a machine learning interpretability framework that enables users to understand how these generated traces violate their privacy. With the emergence of connected devices (e.g., smartphones and smart meters), pervasive systems generate growing amounts of digital traces as users undergo their everyd…

Cited by 10 publications (12 citation statements)
References 19 publications
“…Great progress has been made in the areas of privacy, fairness or explainability. For example, many privacy-friendly techniques for the use of data sets and learning algorithms have been developed, using methods where AI systems' "sight" is "darkened" via cryptography, differential or stochastic privacy (Ekstrand, Joshaghani, and Mehrpouyan 2018; Baron and Musolesi 2017; Duchi, Jordan, and Wainwright 2013; Singla et al. 2014). Nevertheless, this contradicts the observation that AI has been making such massive progress for several years precisely because of the large amounts of (personal) data available.…”
Section: Loyalty To Guidelines
confidence: 99%
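The passage above names differential privacy among the techniques that "darken" an AI system's view of personal data. As a minimal, hypothetical sketch (not drawn from any of the cited works), the Laplace mechanism adds noise calibrated to a query's sensitivity and a privacy budget epsilon; the function names here are illustrative:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of the Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    # Smaller epsilon -> larger noise -> stronger privacy, lower utility.
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: release a noisy count instead of the exact one.
noisy = dp_count(42, epsilon=0.5)
```

Individual releases are noisy, but the noise has zero mean, so aggregate utility is preserved while any single user's contribution is masked.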
“…As researchers, we usually strive to enhance utility of applications and algorithms, and often use personalisation as a tool to increase utility. While this is important, an increasing body of work has also emphasized the importance of privacy preservation and the use of less sensitive data [16,20,33,55,58,59]. Personalization and privacy preservation are at the two opposite ends of the spectrum because personalisation has typically required more personal data to provide high utility, while privacy preservation aims at providing reasonable utility from the application, while preserving privacy of users from known risks.…”
Section: Discussion
confidence: 99%
“…We believe that designing ubicomp technology for joint privacy and utility, and not only for personalisation, is important for the advancement of the field in a progressive and ethical manner. Recent literature further discusses why new privacy preservation techniques are needed by pointing out that simple anonymization techniques are no longer enough to preserve user privacy [16].…”
Section: Discussion
confidence: 99%
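The claim that simple anonymization is no longer enough to preserve user privacy can be illustrated with a toy linkage attack: records stripped of names may still be re-identified by joining on quasi-identifiers. All names and values below are hypothetical, not from the cited paper:

```python
# "Anonymized" records: names removed, but quasi-identifiers remain.
anonymized = [
    {"zip": "02139", "birth_year": 1985, "sex": "F", "diagnosis": "flu"},
    {"zip": "02139", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# Public auxiliary dataset (e.g., a voter roll) sharing those attributes.
public = [
    {"name": "Alice", "zip": "02139", "birth_year": 1985, "sex": "F"},
]

def reidentify(anon_rows, public_rows):
    # Link records on quasi-identifiers; a unique match re-identifies a user.
    keys = ("zip", "birth_year", "sex")
    matches = []
    for a in anon_rows:
        hits = [p for p in public_rows if all(p[k] == a[k] for k in keys)]
        if len(hits) == 1:
            matches.append((hits[0]["name"], a["diagnosis"]))
    return matches
```

Here the first "anonymized" record links uniquely to Alice, exposing her diagnosis despite the removed name, which is why stronger guarantees such as differential privacy are advocated.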