2020
DOI: 10.1109/tkde.2018.2878698
Deep Private-Feature Extraction

Abstract: We present and evaluate Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information-theoretic constraints. Using the selective exchange of information between a user's device and a service provider, DPFE enables the user to prevent certain sensitive information from being shared with a service provider, while allowing them to extract approved information using their model. We introduce and utilize the log-rank privacy, a novel measure to assess the effectiveness of D…
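The abstract's log-rank privacy measure can be illustrated with a minimal sketch. This is not the authors' code; it assumes (as a simplification) that log-rank privacy averages the log of the rank an adversary assigns to the true sensitive class when sorting classes by posterior probability — rank 1 means the adversary's top guess is correct, so larger values indicate better hiding of the private attribute.

```python
import math

def log_rank_privacy(posteriors, true_labels):
    """Hypothetical sketch: mean log2-rank of the true sensitive
    class under an adversary's posterior ordering."""
    total = 0.0
    for probs, y in zip(posteriors, true_labels):
        # classes ordered from most to least likely under the adversary
        order = sorted(range(len(probs)), key=lambda k: probs[k], reverse=True)
        rank = order.index(y) + 1  # rank 1 = adversary's top guess
        total += math.log2(rank)
    return total / len(posteriors)

# an adversary that always ranks the true class first leaks everything:
print(log_rank_privacy([[0.9, 0.05, 0.05]], [0]))  # → 0.0
# when the true class is ranked last of 3, privacy is log2(3) ≈ 1.58 bits:
print(log_rank_privacy([[0.1, 0.2, 0.7]], [0]))
```

Under this reading, a perfectly private feature would push the adversary toward a uniform ranking, maximizing the expected log-rank.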

Cited by 59 publications (31 citation statements)
References 57 publications
“…This should motivate future work on better defenses. For instance, techniques that learn only the features relevant to a given task [15,42,43] can potentially serve as the basis for "least-privilege" collaboratively trained models. Further, it may be possible to detect active attacks that manipulate the model into learning extra features.…”
Section: Results
confidence: 99%
“…Recently, Sanyal et al. have further decreased the computational complexity of encryption-based methods using parallelization techniques, but their method still requires more than 100 seconds to be run on a 16-machine cluster [44]. Instead of encryption-based methods, an information-theoretic approach is recently introduced in [45], where the main focus is on discarding information related to a single user-defined sensitive variable. However, the end-user may not have a complete understanding about what can be inferred from her data to define as sensitive variables.…”
Section: Learning With Privacy
confidence: 99%
“…Transformations can reduce the amount of sensitive information in the data by reconstruction [12] or by projecting each data sample into a lower dimensional latent representation [13,28]. The information bottleneck in the hidden layers of neural networks helps to capture the main factors of variation in the data and to identify and obscure sensitive patterns in the latent representation [13], as well as during the reconstruction from the extracted low-dimensional representation [29,30].…”
Section: Filtering and Transformations
confidence: 99%
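The last excerpt's idea — projecting each sample into a lower-dimensional latent representation so that the bottleneck discards information, ideally including sensitive patterns — can be sketched numerically. This is a toy linear bottleneck under assumed shapes (32 raw features, a 4-dimensional code), not any cited paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 100 samples with 32 raw features
x = rng.normal(size=(100, 32))

# hypothetical bottleneck: a linear projection into a 4-dimensional
# latent space keeps only 4 directions of variation
w_enc = rng.normal(size=(32, 4)) / np.sqrt(32)
z = x @ w_enc                     # low-dimensional representation

# a linear decoder reconstructs from the bottleneck; the reconstruction
# can only retain what survives the 4-dimensional code
w_dec = np.linalg.pinv(w_enc)
x_hat = z @ w_dec

print(z.shape)      # (100, 4)
print(x_hat.shape)  # (100, 32)
```

In the cited transformation-based defenses, the encoder is a trained neural network rather than a random projection, and the training objective explicitly penalizes retention of the sensitive attribute in `z`.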