2013
DOI: 10.1109/tkde.2011.221
Clustering Uncertain Data Based on Probability Distribution Similarity

Cited by 142 publications (102 citation statements). References 28 publications.
“…Figure 1 shows the RSS values collected from a fixed location by 4 heterogeneous mobile devices. Each bar in this figure is the average of 100 collected RSS samples with a sampling rate of 1 Hz, and we also add standard error bars. Obviously, the RSS values from the same AP at a fixed location are uncertain due to many factors, that is, heterogeneous devices, indoor layout changes, and weather conditions. Since Euclidean distance is not enough to measure the similarity of uncertain data, we use both Euclidean distance and Kullback-Leibler (KL) Divergence [30] to measure the similarity between two location fingerprints.…”
Section: The Second Stage: Indoor Positioning Using WiFi RSS
confidence: 99%
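The combination described above can be sketched as follows. This is a minimal illustration, not the cited system: the fingerprints, their values, and the helper names (`euclidean`, `kl_gaussian`) are hypothetical, and each AP's RSS is assumed to follow a Gaussian so that KL divergence has a closed form.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two mean-RSS vectors (dBm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kl_gaussian(mu0, sigma0, mu1, sigma1):
    """Closed-form KL(N(mu0, sigma0^2) || N(mu1, sigma1^2))."""
    return (math.log(sigma1 / sigma0)
            + (sigma0 ** 2 + (mu0 - mu1) ** 2) / (2 * sigma1 ** 2)
            - 0.5)

# Hypothetical fingerprints: one (mean RSS, std) pair per AP.
fp_a = [(-48.0, 2.1), (-67.0, 3.4)]
fp_b = [(-52.0, 2.8), (-64.0, 3.0)]

# Geometric part: distance between the mean-RSS vectors.
d_euc = euclidean([m for m, _ in fp_a], [m for m, _ in fp_b])
# Distributional part: summed per-AP KL divergence.
d_kl = sum(kl_gaussian(m0, s0, m1, s1)
           for (m0, s0), (m1, s1) in zip(fp_a, fp_b))
print(d_euc, d_kl)
```

The two distances capture different things: Euclidean distance compares only the means, while the KL term also penalizes differing spreads, which is the uncertainty the excerpt is concerned with.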
“…JSD is symmetric, non-negative, and bounded. It is widely used in the community of uncertain data mining [45]. The distribution difference J(U_i || U_j) between the data objects U_i and U_j is defined as:…”
Section: Dissimilarity Estimation
confidence: 99%
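The three properties the excerpt names (symmetric, non-negative, bounded) can be checked directly for discrete distributions. A minimal sketch, with made-up distributions `p` and `q`; the standard definition JSD(P, Q) = ½ KL(P || M) + ½ KL(Q || M) with M = (P + Q)/2 is assumed, under which JSD is bounded by ln 2 in nats.

```python
import math

def kl(p, q):
    """KL divergence for discrete distributions (natural log)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: symmetric, non-negative, <= ln 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.1, 0.4, 0.5]
q = [0.6, 0.3, 0.1]
assert abs(jsd(p, q) - jsd(q, p)) < 1e-12   # symmetric
assert 0 <= jsd(p, q) <= math.log(2)        # non-negative and bounded
```

The midpoint distribution `m` is what makes JSD well-defined even when one distribution has zeros where the other does not, a practical advantage over raw KL divergence.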
“…The smaller the KL-divergence, the more similar the two distributions. It was shown effective for measuring the similarity between uncertain objects in [27], because the distribution difference cannot be captured directly by geometric distances. KL divergence is a non-symmetric measure, which means the divergence from distribution P to Q is generally not the same as that from Q to P.…”
Section: Relationship Outliers and Categorization
confidence: 99%
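The asymmetry the excerpt points out is easy to demonstrate numerically. A small sketch with two made-up discrete distributions, using the standard discrete KL definition:

```python
import math

def kl(p, q):
    """Discrete KL divergence KL(P || Q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
q = [0.2, 0.3, 0.5]

# The two directions generally disagree, so KL is not a metric.
print(kl(p, q), kl(q, p))
assert kl(p, q) != kl(q, p)
```

Because of this asymmetry, works that need a symmetric dissimilarity typically symmetrize KL or switch to JSD, as the other excerpts here do.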
“…We borrow the idea from [27]. We apply the continuous case to our data set since the measurements in this thesis are continuous.…”
Section: Using KL Divergence As Similarity
confidence: 99%