2001
DOI: 10.1002/rsa.10019

Convergence properties of functional estimates for discrete distributions

Abstract: Suppose P is an arbitrary discrete distribution on a countable alphabet. Given an i.i.d. sample X_1, …, X_n drawn from P, we consider the problem of estimating the entropy H(P) or some other functional F = F(P) of the unknown distribution P. We show that, for additive functionals satisfying mild conditions (including the cases of the mean, the entropy, and mutual information), the plug-in estimates of F are universally consistent. We also prove that, without further assumptions, no rate-of-convergence results can be…
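The abstract refers to plug-in estimates: the functional F is evaluated at the empirical distribution of the sample. A minimal sketch of this idea for the entropy, in Python (the function name and the example distribution are illustrative, not from the paper):

```python
import math
import random
from collections import Counter

def plugin_entropy(sample):
    """Plug-in (maximum-likelihood) entropy estimate in nats: evaluate
    H at the empirical distribution induced by the sample."""
    n = len(sample)
    return -sum((c / n) * math.log(c / n) for c in Counter(sample).values())

# Example: i.i.d. sample from a small discrete distribution (illustrative values).
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
sample = random.choices(list(p), weights=list(p.values()), k=10_000)
print(plugin_entropy(sample))                        # plug-in estimate of H(P)
print(-sum(q * math.log(q) for q in p.values()))     # true H(P) ≈ 1.2130 nats
```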

Cited by 200 publications (193 citation statements). References 21 publications.
“…Moreover, there is no closed-form expression for the entropy rate of a general HMM, but, as outlined below, it is fairly easy to obtain an accurate approximation when the distribution of the HMM is known a priori, via the Shannon-McMillan-Breiman theorem (12). That is, the value of the entropy rate H = H(X) can be estimated accurately as long as it is possible to get a close approximation for the probability p_n(X_1^n) of a long random sample X_1^n.…”
Section: Hidden Markov Models (mentioning)
confidence: 99%
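The quoted passage estimates the entropy rate by evaluating the likelihood of a long sample path, as licensed by the Shannon-McMillan-Breiman theorem: -(1/n) log p_n(X_1^n) → H(X). A minimal sketch for an HMM whose distribution is known a priori, using the standard scaled forward recursion to compute log p_n(X_1^n); the two-state, binary-output model below is an assumption for illustration, not taken from the cited work:

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """log p_n(x_1^n) for a known HMM via the scaled forward algorithm.
    pi: initial state distribution, A: state transition matrix,
    B: emission probabilities, B[state, symbol]."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]     # forward recursion
        log_p += np.log(alpha.sum())      # accumulate log of the normalizer
        alpha /= alpha.sum()
    return log_p

def sample_hmm(n, pi, A, B, rng):
    """Draw a length-n observation sequence from the HMM."""
    obs = np.empty(n, dtype=int)
    s = rng.choice(len(pi), p=pi)
    for t in range(n):
        obs[t] = rng.choice(B.shape[1], p=B[s])
        s = rng.choice(A.shape[1], p=A[s])
    return obs

# Illustrative two-state, binary-output HMM (all parameter values are assumptions).
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.95, 0.05], [0.3, 0.7]])

rng = np.random.default_rng(0)
x = sample_hmm(100_000, pi, A, B, rng)
# Shannon-McMillan-Breiman: -(1/n) log p_n(X_1^n) -> H(X) almost surely.
print(-hmm_log_likelihood(x, pi, A, B) / len(x))
```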
“…A, necessarily incomplete, sample of the theory that has been developed can be found in [1][2][3][4][5][6][7][8][9][10][11][12][13][14] and the references therein. Examples of numerous different applications are contained in the above list, as well as in [15][16][17][18][19][20][21][22][23][24][25][26][27].…”
Section: Introduction (mentioning)
confidence: 99%
“…Miller [2] and Basharin [3] were among the first to study nonparametric estimation of H. Since then, the topic has been investigated from a variety of directions and perspectives. Many important references can be found in [4] and [5]. In this paper, we introduce a modification of an estimator of entropy, which was first defined by Zhang in [6].…”
Section: Introduction (mentioning)
confidence: 99%
“…Simple estimators of entropy have low variances but high biases that are difficult to calculate due to the divergence of the logarithm near zero [1]. Developments driven in part by computational biology applications have solved this problem in the moderately undersampled regime, N ∼ K and N ∼ e^H [1][2][3][4][5][6][7][8][9]. Interestingly, they also resulted in the understanding that it is impossible to estimate entropy with zero bias uniformly over all distributions for a smaller N.…”
Section: Introduction (mentioning)
confidence: 99%
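The bias discussed in this excerpt is easy to observe numerically: in the undersampled regime N ∼ K the plug-in estimate falls well below the true entropy. A small simulation sketch, assuming a uniform distribution on K symbols, with the classical first-order (Miller-Madow) correction (m − 1)/(2N) shown for comparison; the specific constants are illustrative:

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in entropy estimate (nats) from symbol counts."""
    n = counts.sum()
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))

def miller_madow(counts):
    """Plug-in estimate plus the first-order (Miller-Madow) bias correction
    (m - 1)/(2N), where m is the number of observed symbols."""
    n = counts.sum()
    m = np.count_nonzero(counts)
    return plugin_entropy(counts) + (m - 1) / (2 * n)

# Undersampled regime N ~ K: uniform distribution on K symbols, N = K samples.
K, N, trials = 1000, 1000, 200
rng = np.random.default_rng(1)
est_plugin, est_mm = [], []
for _ in range(trials):
    counts = np.bincount(rng.integers(K, size=N), minlength=K)
    est_plugin.append(plugin_entropy(counts))
    est_mm.append(miller_madow(counts))

print("true H          :", np.log(K))
print("plug-in  (mean) :", np.mean(est_plugin))   # noticeably below the true H
print("Miller-Madow    :", np.mean(est_mm))       # bias partly removed
```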