2012
DOI: 10.1109/tit.2012.2195549

Estimation of Nonlinear Functionals of Densities With Confidence

Abstract: This paper introduces a class of k-nearest neighbor (k-NN) estimators called bipartite plug-in (BPI) estimators for estimating integrals of non-linear functions of a probability density, such as Shannon entropy and Rényi entropy. The density is assumed to be smooth, have bounded support, and be uniformly bounded from below on this set. Unlike previous k-NN estimators of non-linear density functionals, the proposed estimator uses data-splitting and boundary correction to achieve lower mean square error. Specifi…
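The abstract describes a split-sample k-NN plug-in construction, so a minimal sketch may help fix ideas. The sketch below is an illustration only: the 50/50 split, the choice of k, the Shannon-entropy functional, and the omission of the paper's boundary correction are all assumptions, not the authors' BPI estimator.

```python
# Minimal sketch of a split-sample k-NN plug-in estimator of Shannon entropy,
# in the spirit of the bipartite plug-in idea described in the abstract.
# The split ratio, k, and the absence of boundary correction are assumptions.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma


def knn_plugin_entropy(x, k=10, split=0.5):
    """Estimate H(f) = -E[log f(X)] by plugging a k-NN density estimate
    built on one half of the sample into an average over the other half."""
    n, d = x.shape
    m = int(split * n)
    x_density, x_eval = x[:m], x[m:]  # data splitting: estimate vs. evaluate

    # k-NN density estimate f_hat(y) = k / (m * c_d * r_k(y)^d), where r_k(y)
    # is the distance from y to its k-th nearest neighbor in x_density and
    # c_d is the volume of the unit ball in R^d.
    r_k = cKDTree(x_density).query(x_eval, k=k)[0][:, -1]
    c_d = np.pi ** (d / 2) / gamma(d / 2 + 1)
    f_hat = k / (m * c_d * r_k ** d)

    return -np.mean(np.log(f_hat))  # plug-in estimate of -E[log f(X)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(size=(4000, 2))  # uniform on [0, 1]^2, true entropy 0
    print(knn_plugin_entropy(x))
```

For densities with bounded support, the k-NN distances near the boundary are inflated, which biases an uncorrected estimate like this one; that is the bias the paper's boundary correction is designed to reduce.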

Cited by 55 publications (65 citation statements). References 37 publications.

“…For a broad class of density plug-in estimators, which includes the common kernel density and k-nearest neighbor (k-NN) plug-in estimators, we have developed a generally applicable theory that gives analytical closed-form expressions for asymptotic bias and MSE in terms of the sample size, the dimension of the feature space, and the underlying feature probability distribution. These results appear in the technical report [6], co-authored by co-PIs Hero and Raich and the supported University of Michigan graduate student Kumar Sricharan. This report incorporates comparisons to state-of-the-art divergence and entropy estimation algorithms and is an extension of the report cited in last year's progress report.…”
Section: Expressions for Divergence Estimator Bias, Variance, and a CLT (mentioning)
confidence: 73%
“…We have obtained the sharpest asymptotic expressions to date for estimator bias, variance, and a CLT for a wide class of information divergence estimators [6], [7]. The utility of these expressions is that they can be used to optimize over tuning parameters of the fusion criterion, thereby circumventing the need for manual parameter tuning.…”
Section: Technical Accomplishments (mentioning)
confidence: 99%
“…This in turn can be used to predict and optimize performance in applications like structure discovery in graphical models and dimension estimation for support sets of low intrinsic dimension. See [20] for more details on these applications.…”
Section: Discussion (mentioning)
confidence: 99%
“…to show that convergence in distribution to N(0, 1) holds in our case as both N and M get large. These ideas are rigorously treated in Appendix E of [20].…”
Section: Theorem III.3, The Asymptotic Distribution of the Normalized … (mentioning)
confidence: 99%
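The excerpt above appeals to a CLT for the normalized estimator as both N and M grow. A hypothetical Monte Carlo check of that kind of claim is sketched below; taking N and M as the sizes of the two parts of the data split is an assumption, and the statistic is standardized empirically rather than with the paper's asymptotic bias and variance expressions.

```python
# Hypothetical Monte Carlo check of approximate normality for a split-sample
# k-NN plug-in entropy estimate. N and M are assumed to be the two split
# sizes; standardization is empirical, not the paper's normalization.
import numpy as np
from scipy import stats
from scipy.spatial import cKDTree
from scipy.special import gamma


def split_knn_entropy(x_density, x_eval, k=10):
    """k-NN plug-in entropy estimate from separate density and evaluation samples."""
    m, d = x_density.shape
    r_k = cKDTree(x_density).query(x_eval, k=k)[0][:, -1]
    c_d = np.pi ** (d / 2) / gamma(d / 2 + 1)
    return -np.mean(np.log(k / (m * c_d * r_k ** d)))


rng = np.random.default_rng(1)
N, M, d, trials = 1000, 1000, 2, 300
estimates = np.array([
    split_knn_entropy(rng.uniform(size=(N, d)), rng.uniform(size=(M, d)))
    for _ in range(trials)
])

# Standardize empirically and compare the empirical law with N(0, 1).
z = (estimates - estimates.mean()) / estimates.std()
print(stats.kstest(z, "norm"))
```

A small Kolmogorov-Smirnov statistic (and a p-value that is not tiny) is consistent with, though of course not a proof of, the Gaussian limit the excerpt refers to.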