2010
DOI: 10.1007/978-3-642-13739-6_3

Understanding Privacy Risk of Publishing Decision Trees

Abstract: Publishing decision trees can provide enormous benefits to society. Meanwhile, it is widely believed that publishing decision trees can pose a potential risk to privacy. However, there has been little investigation into the privacy consequences of publishing decision trees. To understand this problem, we need to quantitatively measure privacy risk. Based on the well-established maximum entropy theory, we have developed a systematic method to quantify privacy risks when decision trees are published. Our m…
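
As a rough illustration of the maximum entropy idea the abstract refers to (not the authors' actual method), the sketch below computes the least-biased joint distribution over a toy attribute domain, subject to linear constraints of the kind a published decision tree might reveal, such as leaf class fractions. All attribute layouts, constraints, and numbers are hypothetical, and scipy's general-purpose SLSQP solver stands in for whatever optimization procedure the paper actually uses.

# Minimal sketch: maximum entropy estimation under constraints that a
# published decision tree might reveal. Everything below is hypothetical.

import numpy as np
from scipy.optimize import minimize

# Toy domain: 2 binary attributes x 2 classes = 8 joint cells.
# The joint distribution is a vector p of length 8.
n_cells = 8

# Hypothetical linear equality constraints A @ p = b:
#   row 0: probabilities must sum to 1;
#   row 1: cells 0 and 1 (e.g., one leaf's records) have total mass 0.3;
#   row 2: cells 4..7 (e.g., records with a1 = 1) have total mass 0.5.
A = np.zeros((3, n_cells))
A[0, :] = 1.0
A[1, [0, 1]] = 1.0
A[2, [4, 5, 6, 7]] = 1.0
b = np.array([1.0, 0.3, 0.5])

def neg_entropy(p):
    # Negative Shannon entropy; the small epsilon avoids log(0).
    return np.sum(p * np.log(p + 1e-12))

result = minimize(
    neg_entropy,
    x0=np.full(n_cells, 1.0 / n_cells),  # start from the uniform distribution
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n_cells,
    constraints=[{"type": "eq", "fun": lambda p: A @ p - b}],
)

p_maxent = result.x
print("Max-entropy estimate per cell:", np.round(p_maxent, 3))

In a privacy analysis of this flavor, an adversary would compare such a constrained estimate against a public prior: cells where the estimate concentrates well above the prior indicate information the published tree leaks about the training records.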

Cited by 7 publications (2 citation statements); citing publications span 2011 to 2024. References 22 publications.

Citation statements (ordered by relevance):
“…However, the computed decision tree provides no privacy guarantee. Zhu and Du show that publishing decision trees without formal guarantee threatens individual privacy [16]. Recently, a few differentially private classifiers have been proposed [17].…”
Section: B. Experiments
confidence: 99%
“…Tree-based models can be sensitive due to their business value and because they contain information about the data on which they were trained. For instance, Zhu and Du [193] quantify the privacy risks associated with publishing decision trees and show that a maximum entropy estimate can leak information about the training data. As a result, it is crucial for privacy-preserving solutions to protect the trained model.…”
Section: Leakage Taxonomy
confidence: 99%