2012
DOI: 10.1016/j.eswa.2012.05.014
Nearest neighbor estimate of conditional mutual information in feature selection

Cited by 33 publications (34 citation statements)
References 32 publications
“…However, unlike the discrete case, computing CMI for continuous variables requires estimating probability density functions (pdfs) from the available data samples, which is not a straightforward task and is, in fact, considered one of the main problems associated with applying this measure. In this regard, several non-parametric approaches exist in the literature, of which we select the k-nearest-neighbour estimator [9,14]. Unlike many other approaches, this estimator accommodates high variable dimensionality (the estimator of the CMI for context variables must work well in higher dimensions) and produces more accurate results.…”
Section: Computational Solution (mentioning)
confidence: 98%
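As a concrete illustration of the k-nearest-neighbour route to CMI referred to in the statement above, the following Python sketch implements a Kraskov-style (Frenzel-Pompe) estimator of I(X; Y | Z). It is a sketch under stated assumptions rather than the exact estimator of [9,14]: the function name estimate_cmi, the default k = 5, the maximum norm, and the small radius shrink used to emulate a strict inequality are all illustrative choices.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def estimate_cmi(x, y, z, k=5):
    """kNN (Frenzel-Pompe style) estimate of I(X; Y | Z) in nats.

    x, y, z are sample arrays of shape (n, d_x), (n, d_y), (n, d_z).
    """
    xyz = np.hstack((x, y, z))
    xz = np.hstack((x, z))
    yz = np.hstack((y, z))
    n = xyz.shape[0]

    # Max-norm distance from each sample to its k-th nearest neighbour in the
    # full (x, y, z) space; k + 1 because the query also returns the point itself.
    eps = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]

    def count_within(a):
        # Number of other samples strictly inside eps(i) in the subspace `a`
        # (strict inequality emulated by shrinking the radius slightly).
        tree = cKDTree(a)
        return np.array([
            len(tree.query_ball_point(a[i], eps[i] * (1 - 1e-10), p=np.inf)) - 1
            for i in range(n)
        ])

    n_xz, n_yz, n_z = count_within(xz), count_within(yz), count_within(z)

    # I(X; Y | Z) = psi(k) - < psi(n_xz + 1) + psi(n_yz + 1) - psi(n_z + 1) >
    return digamma(k) - np.mean(digamma(n_xz + 1) + digamma(n_yz + 1) - digamma(n_z + 1))

For independent X and Y given Z the estimate fluctuates around zero, which provides a simple sanity check on an implementation of this kind.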
“…It should be noted that the above integrals may result in a heavy computational burden. Therefore, the above marginal entropies and joint entropy are transformed into a simpler expression using the k-nearest-neighbours algorithm (kNN) and the Kozachenko-Leonenko estimator of Shannon entropies. According to the concept of the nearest neighbour, the joint entropy can be expressed as
$$H(z,v) = \frac{d_z + d_v}{N}\sum_{i=1}^{N}\log\epsilon(i) + \psi(N) - \psi(k) + \log\left(c_{d_z} c_{d_v}\right)$$
where $\epsilon_z(i)/2$ and $\epsilon_v(i)/2$ are the distances between the same points projected into the $z$ and $v$ subspaces, respectively.…”
Section: Preliminaries (mentioning)
confidence: 99%
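To make the reconstructed expression above concrete, here is a minimal Python sketch of the Kozachenko-Leonenko joint-entropy estimate under the maximum norm, for which the volume term log(c_{d_z} c_{d_v}) vanishes (a max-norm ball of diameter eps is a hypercube of volume eps^d). The function name kl_joint_entropy, the default k = 5, and the choice of norm are assumptions for illustration, not taken from the cited work.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def kl_joint_entropy(z, v, k=5):
    """Kozachenko-Leonenko estimate of H(Z, V) in nats.

    z and v are sample arrays of shape (N, d_z) and (N, d_v).
    """
    zv = np.hstack((z, v))
    n, d = zv.shape  # d = d_z + d_v

    # eps(i) is twice the max-norm distance from sample i to its k-th nearest
    # neighbour (k + 1 because the query also returns the point itself).
    knn_dist = cKDTree(zv).query(zv, k=k + 1, p=np.inf)[0][:, -1]
    eps = 2.0 * knn_dist

    # H(Z, V) ~= psi(N) - psi(k) + (d_z + d_v) / N * sum_i log eps(i)
    return digamma(n) - digamma(k) + (d / n) * np.sum(np.log(eps))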
“…Based on the DMIS strategy, the moving windows with similar dynamics are assigned to a cluster. Mutual information is a Shannon-entropy-based technique that can measure the information shared by two datasets with different dimensions. Compared with PCA-based similarity measurement methods, the mutual information method does not need any established model or iterative calculation.…”
Section: Introduction (mentioning)
confidence: 99%
“…It is worth noting that estimating mutual information by directly computing the integrals and summations in Equation (17) is intensive and inefficient in practice. In order to reduce the computational burden, a nearest-neighbour strategy based on the Kozachenko–Leonenko estimator of Shannon entropy has been proposed and works well for estimating mutual information numerically. Firstly, the nearest-neighbour estimate of the joint entropy is
$$H(x,y) = -\psi(k) + \psi(n) + \log\left(c_{d_x} c_{d_y}\right) + \frac{d_x + d_y}{n}\sum_{i=1}^{n}\log\varepsilon(i)$$
where $\varepsilon(i) = \max\{\varepsilon_x(i), \varepsilon_y(i)\}$ is the maximum Euclidean norm for the $i$th sample point…”
Section: Variable Moving Windows Based Non-Gaussian Dissimilarity Analysis (mentioning)
confidence: 99%
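Combining the marginal entropies with the joint-entropy estimate above via I(X; Y) = H(X) + H(Y) - H(X, Y), and collecting the digamma terms, gives the familiar Kraskov-Stoegbauer-Grassberger form. The Python sketch below shows that form under stated assumptions; the name ksg_mutual_information, the default k = 5, and the max-norm are illustrative choices, not the exact formulation of the cited paper.

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_information(x, y, k=5):
    """Kraskov-style kNN estimate of I(X; Y) in nats.

    x and y are sample arrays of shape (n, d_x) and (n, d_y).
    """
    xy = np.hstack((x, y))
    n = xy.shape[0]

    # eps(i): max-norm distance to the k-th neighbour in the joint space,
    # i.e. max{eps_x(i), eps_y(i)} as in the equation quoted above.
    eps = cKDTree(xy).query(xy, k=k + 1, p=np.inf)[0][:, -1]

    def count_within(a):
        # Neighbours of sample i strictly inside eps(i) in the marginal
        # space `a`, excluding the sample itself.
        tree = cKDTree(a)
        return np.array([
            len(tree.query_ball_point(a[i], eps[i] * (1 - 1e-10), p=np.inf)) - 1
            for i in range(n)
        ])

    n_x, n_y = count_within(x), count_within(y)

    # I(X; Y) = psi(k) + psi(n) - < psi(n_x + 1) + psi(n_y + 1) >
    return digamma(k) + digamma(n) - np.mean(digamma(n_x + 1) + digamma(n_y + 1))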