Machine Learning in Cyber Trust 2009
DOI: 10.1007/978-0-387-88735-7_10
Privacy Preserving Nearest Neighbor Search

Abstract: Data mining is frequently obstructed by privacy concerns. In many cases data is distributed, and bringing the data together in one place for analysis is not possible due to privacy laws (e.g. HIPAA) or policies. Privacy preserving data mining techniques have been developed to address this issue by providing mechanisms to mine the data while giving certain privacy guarantees. In this paper we address the issue of privacy preserving nearest neighbor search, which forms the kernel of many data mining applications…
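To make the distributed setting concrete, here is a minimal, non-cryptographic sketch (hypothetical function names and toy data, not the protocol proposed in the paper) of nearest neighbor search over horizontally partitioned data: each site answers a query by revealing only its local best distance and label rather than its raw records. A genuinely privacy-preserving protocol would additionally hide these per-site distances, e.g. with secure comparison; this sketch does not.

```python
import numpy as np

def local_nearest_neighbor(site_data, site_labels, query):
    """Each site finds its own nearest neighbor to the query and
    reveals only the best distance and label, not its raw records."""
    dists = np.linalg.norm(site_data - query, axis=1)
    best = int(np.argmin(dists))
    return dists[best], site_labels[best]

def distributed_nearest_neighbor(sites, query):
    """Collect one (distance, label) pair per site and keep the global best.
    Note: the per-site distances are exchanged in the clear here; a real
    privacy-preserving protocol would protect them as well."""
    return min(local_nearest_neighbor(X, y, query) for X, y in sites)

# Toy example: two sites, each holding its own records (hypothetical data).
rng = np.random.default_rng(0)
site_a = (rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))
site_b = (rng.normal(size=(60, 3)), rng.integers(0, 2, size=60))
query = np.zeros(3)

dist, label = distributed_nearest_neighbor([site_a, site_b], query)
print(f"nearest neighbor at distance {dist:.3f} with label {label}")
```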

Cited by 40 publications (36 citation statements). References 40 publications.
“…A two-party protocol for finding k-Nearest Neighbors was given in [SKK06], and improved from quadratic to linear communication complexity in [QA08]. Our protocol for finding the nearest neighbor is a more efficient protocol for the special case k = 1.…”
Section: Introduction (mentioning)
confidence: 99%
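As a side note on the two-party setting quoted above, a common building block (an illustrative assumption here, not a description of the protocols in [SKK06] or [QA08]) is to split the squared Euclidean distance so that each party computes its own norm locally and only the cross term requires a cryptographic sub-protocol. The sketch below stubs that sub-protocol out and computes it in the clear.

```python
import numpy as np

def secure_dot_product(x, y):
    """Placeholder for a cryptographic sub-protocol (e.g. one based on
    additively homomorphic encryption). In this sketch it is in the clear."""
    return float(np.dot(x, y))

def two_party_squared_distance(x, y):
    """||x - y||^2 = x.x + y.y - 2 x.y: each party computes its own norm
    locally; only the cross term x.y needs a secure protocol."""
    return float(np.dot(x, x)) + float(np.dot(y, y)) - 2.0 * secure_dot_product(x, y)

# Toy check against a direct computation (hypothetical vectors).
x = np.array([1.0, 2.0, 3.0])   # party A's query point
y = np.array([0.5, 1.5, 2.0])   # one of party B's records
assert abs(two_party_squared_distance(x, y) - np.sum((x - y) ** 2)) < 1e-9
```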
“…With the development of data mining, various methods for privacy preservation have been provided [8][9][10][11][12][13][14][15][16]. The advantages and disadvantages of these methods, and how to implement them better, have been much discussed. Most of the proposed methods are based on perturbation, randomization or anonymity.…”
Section: Related Work (mentioning)
confidence: 99%
“…Papers [12,15,16] show methods that follow this approach: the algorithms distribute the calculations between data servers and then exchange the results of the calculations instead of the data itself. A major disadvantage of this approach is that many of these algorithms have to disclose new instances.…”
Section: Related Work (mentioning)
confidence: 99%
“…Related studies have been published focusing not only on simple analyses such as database queries with very specific inclusion/exclusion criteria but also on sophisticated algorithms for prediction analysis, including logistic regression [13,14], support vector machines (SVM) [15,16], k-nearest neighbors [17], Cox regression [18], and tensor factorization [19]. However, most studies involve restrictive assumptions originating from the requirement that data should be integrated in a matrix format: either a common-feature-events assumption for horizontally-partitioned data or a common-patient-records assumption for vertically-partitioned data.…”
Section: Introduction (mentioning)
confidence: 99%