2017
DOI: 10.1109/tit.2017.2733537

Maximum Likelihood Estimation of Functionals of Discrete Distributions

Abstract: We consider the problem of estimating functionals of discrete distributions, and focus on tight nonasymptotic analysis of the worst case squared error risk of widely used estimators. We apply concentration inequalities to analyze the random fluctuation of these estimators around their expectations, and the theory of approximation using positive linear operators to analyze the deviation of their expectations from the true functional, namely their bias. We characterize the worst case squared error risk …
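The estimators analyzed are plug-in (maximum likelihood) estimators: evaluate the functional at the empirical distribution P_n. A minimal Python sketch of that recipe, where the uniform test distribution, the support and sample sizes, and all function names are our own illustrative choices rather than anything specified in the paper:

```python
import numpy as np

def empirical_distribution(samples, support_size):
    """Empirical (maximum likelihood) estimate P_n of the distribution."""
    counts = np.bincount(samples, minlength=support_size)
    return counts / counts.sum()

def entropy(p):
    """Shannon entropy H(P) = -sum_i p_i ln p_i, with the convention 0 ln 0 = 0."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def power_sum(p, alpha):
    """F_alpha(P) = sum_i p_i^alpha."""
    p = p[p > 0]
    return np.sum(p ** alpha)

# Plug-in (MLE) estimates: evaluate the functional at P_n.
rng = np.random.default_rng(0)
S, n = 1000, 500                              # illustrative support and sample sizes
true_p = np.full(S, 1.0 / S)                  # illustrative: uniform distribution
samples = rng.choice(S, size=n, p=true_p)
p_hat = empirical_distribution(samples, S)
print("H(P_n)     =", entropy(p_hat), "   H(P)     =", entropy(true_p))
print("F_0.5(P_n) =", power_sum(p_hat, 0.5), "   F_0.5(P) =", power_sum(true_p, 0.5))
```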

Cited by 90 publications (94 citation statements) | References 75 publications
“…Our work (including the companion paper [29]) is the first to obtain the minimax rates, minimax rate-optimal estimators, and the maximum risk of MLE for estimating F_α(P), 0 < α < 3/2, and entropy H(P) in the most comprehensive regime of (S, n) pairs. Evident from Table I is the fact that the MLE cannot achieve the minimax rates for estimation of H(P), and F_α(P) when 0 < α < 3/2.…”
Section: Introduction and Main Results (mentioning)
Confidence: 99%
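To see the MLE's bias concretely in the regime where n is comparable to S, one can compare the plug-in entropy estimate with a classical first-order bias correction. The Miller–Madow correction below is only an illustration of bias correction, not the minimax rate-optimal estimator referred to above; the distribution, sizes, and names are our own illustrative choices:

```python
import numpy as np

def entropy_mle(counts):
    """Plug-in (MLE) entropy estimate from observed counts."""
    n = counts.sum()
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))

def entropy_miller_madow(counts):
    """Miller-Madow correction: MLE + (K - 1)/(2n), K = number of observed symbols.
    A classical first-order bias correction, not the minimax-optimal estimator."""
    n = counts.sum()
    k_observed = np.count_nonzero(counts)
    return entropy_mle(counts) + (k_observed - 1) / (2 * n)

rng = np.random.default_rng(1)
S, n = 2000, 1000                        # illustrative regime where n is comparable to S
true_p = np.full(S, 1.0 / S)
counts = np.bincount(rng.choice(S, size=n, p=true_p), minlength=S)
print("true H       :", np.log(S))
print("MLE          :", entropy_mle(counts))            # noticeably biased downward
print("Miller-Madow :", entropy_miller_madow(counts))   # bias partly removed
```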
“…It was shown in the companion paper [29] that for n ≳ S, the maximum risk of the MLE H(P_n) can be written as
sup_{P ∈ M_S} E_P (H(P_n) − H(P))² ≍ S²/n² + (ln S)²/n,
where the first term corresponds to the squared bias (defined as (E_P H(P_n) − H(P))²), and the second term corresponds to the variance (defined as E_P (H(P_n) − E_P H(P_n))²). Then we can understand this mystery: when we fix S and let n → ∞, the variance dominates and we get the expression in (17).…”
Section: Motivation, Methodology and Related Work (mentioning)
Confidence: 99%
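That bias–variance split can be sanity-checked by Monte Carlo for a fixed (S, n): average the plug-in estimate over many independent samples, then compare the squared bias against the S²/n² term and the variance against the (ln S)²/n term. A rough sketch under our own illustrative choices; note the quoted bound is a supremum over all P with support size S, so for one particular P (uniform here) the two terms only need to be of at most these orders:

```python
import numpy as np

def plugin_entropy(counts):
    """Plug-in (MLE) entropy estimate from observed counts."""
    n = counts.sum()
    p = counts[counts > 0] / n
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(2)
S, n, trials = 500, 2000, 500                # illustrative (S, n) with n >~ S
true_p = np.full(S, 1.0 / S)                 # illustrative choice of P
H_true = np.log(S)

estimates = np.array([
    plugin_entropy(np.bincount(rng.choice(S, size=n, p=true_p), minlength=S))
    for _ in range(trials)
])

sq_bias = (estimates.mean() - H_true) ** 2   # Monte Carlo estimate of squared bias
variance = estimates.var()                   # Monte Carlo estimate of variance
print("squared bias:", sq_bias, "   compare S^2/n^2    :", S**2 / n**2)
print("variance    :", variance, "   compare (ln S)^2/n :", np.log(S)**2 / n)
```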
“…There exists an extensive literature on this subject, and we refer to [22] for a detailed review, as well as for the theory and Matlab/Python implementations of entropy and mutual information estimators that achieve the minimax rates in all regimes of sample size and support size pairs. For the recent growing literature on information measure estimation in the high-dimensional regime, we refer to [23], [24], [25], [22], [26], [27], [28], [29].…”
Section: Problem Formulation and Main Results (mentioning)
Confidence: 99%
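For readers unfamiliar with the quantities mentioned, mutual information admits the same naive plug-in treatment applied to the empirical joint distribution. The sketch below is that plug-in baseline only, not one of the minimax rate-optimal estimators implemented in [22]; all names, sizes, and the toy joint distribution are our own illustrative choices:

```python
import numpy as np

def plugin_mutual_information(joint_counts):
    """Naive plug-in estimate of I(X;Y) from a 2-D contingency table:
    I = sum_{x,y} p(x,y) ln( p(x,y) / (p(x) p(y)) )."""
    p_xy = joint_counts / joint_counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal of X as a column vector
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal of Y as a row vector
    prod = p_x @ p_y                         # outer product p(x) * p(y)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log(p_xy[mask] / prod[mask]))

# Illustrative use: Y copies X with probability 0.9, otherwise Y is resampled uniformly.
rng = np.random.default_rng(3)
m = 5000
x = rng.integers(0, 4, size=m)
y = np.where(rng.random(m) < 0.9, x, rng.integers(0, 4, size=m))
table = np.zeros((4, 4))
np.add.at(table, (x, y), 1)                  # build the empirical joint counts
print("plug-in I(X;Y) =", plugin_mutual_information(table))
```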
“…and we note that V(P_0^{(t)}) in fact does not depend on t. With the help of (29), we plug R(λ, t) = R̃(λ, t) + λU(t) into (27), and obtain …”
Section: A. The Case N ≥ … (mentioning)
Confidence: 99%