2016 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit.2016.7541399
Minimax estimation of the L1 distance

Abstract: We consider the problem of estimating the L1 distance between two discrete probability measures P and Q from empirical data in a non-asymptotic and large-alphabet setting. When Q is known and one obtains n samples from P, we show that for every Q, the minimax rate-optimal estimator with n samples achieves performance comparable to that of the maximum likelihood estimator (MLE) with n ln n samples. When both P and Q are unknown, we construct minimax rate-optimal estimators whose worst case performance is essenti…
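The baseline the abstract benchmarks against is the plug-in (MLE) estimator: estimate P by empirical frequencies and evaluate the L1 distance to the known Q directly. A minimal sketch, assuming NumPy is available; the function name `plug_in_l1` and the example alphabet are illustrative, not from the paper:

```python
import numpy as np

def plug_in_l1(samples, q):
    """Plug-in (MLE) estimate of the L1 distance ||P - Q||_1.

    Estimates P by empirical frequencies from `samples` (integer symbols
    in {0, ..., len(q)-1}) and returns sum_i |p_hat_i - q_i|.
    Illustrative sketch, not the paper's rate-optimal estimator.
    """
    q = np.asarray(q, dtype=float)
    counts = np.bincount(np.asarray(samples), minlength=len(q))
    p_hat = counts / counts.sum()
    return float(np.abs(p_hat - q).sum())

# Example: P uniform on 4 symbols, Q a point mass on symbol 0;
# the true L1 distance is |1/4 - 1| + 3 * 1/4 = 1.5.
rng = np.random.default_rng(0)
samples = rng.integers(0, 4, size=10_000)
q = np.array([1.0, 0.0, 0.0, 0.0])
est = plug_in_l1(samples, q)
```

The paper's point is that on large alphabets this plug-in estimator is suboptimal: the minimax rate-optimal estimator with n samples matches the plug-in estimator's performance with n ln n samples.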

Cited by 44 publications (76 citation statements). References 30 publications.
“…Remark 2. The results on R_EST(D, L, n) and R_PLU(D, L, n) follow from [5]. The key contribution of this paper is the solution of R_ACH(D, L, n), whose upper and lower bounds prove to be non-trivial.…”
Section: Effective Sample Size Enlargement (citation type: mentioning; confidence: 62%)
“…Now we consider the second statement. We used the classical splitting operation [5] to represent random variable X as…”
Section: A Proof of Lemma (citation type: mentioning; confidence: 99%)
“…Simple as it may sound, this methodology has a few drawbacks and ambiguities. In our recent work [24], we applied this general recipe to the estimation of the ℓ1 distance between two discrete distributions, where this recipe proves to be inadequate. In the estimation of the ℓ1 distance, a bivariate function f(x, y) = |x − y| which is non-analytic in a segment was considered, which is completely different from the previous studies [1], [28]–[31] where a univariate function analytic everywhere except a point is always taken into consideration.…”
Section: A Background and Main Results (citation type: mentioning; confidence: 99%)
“…However, it is challenging to find the explicit form of the two-dimensional polynomial approximation for estimating KL divergence as shown in [25]. However, for some problems it is still worth exploring the two-dimensional approximation directly, as shown in [26], where no one-dimensional approximation can achieve the minimax rate.…”
Section: Minimax Upper Bound Via Optimal Estimator (citation type: mentioning; confidence: 99%)