2016
DOI: 10.1109/tit.2016.2553147

Arbitrarily Tight Bounds on Differential Entropy of Gaussian Mixtures

Cited by 23 publications (26 citation statements)
References 16 publications
“…Herein, $s_m$ denotes the $m$-th symbol from the $M$-ary constellation diagram and $\rho_i$ is the transmit power for the $i$-th user. By using the tight approximation derived for Gaussian mixtures in [21], the entropy of $y_i$ can be bounded as $(\gamma + \alpha_{N,N}) \log_2 e + \log_2 \sigma \le h(y_i) \le (\gamma + \beta_{N,N}) \log_2 e + \log_2 \sigma$. Due to space constraints, the detailed explanation of the parameters $\gamma$, $\alpha_{N,N}$, and $\beta_{N,N}$ is not given here and can be found in [21].…”
Section: Achievable Rate
confidence: 99%
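The quantity bounded in this excerpt is the differential entropy of a Gaussian-mixture received signal. As a minimal illustration (not the derivation of [21], whose parameters $\gamma$, $\alpha_{N,N}$, $\beta_{N,N}$ are not reproduced here), the sketch below estimates $h(y)$ for an assumed 4-PAM constellation with Gaussian noise by Monte Carlo averaging of $-\log_2 p(y)$; the constellation, power normalization, and noise level are placeholders.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo estimate of the differential entropy h(y) of a 1-D Gaussian
# mixture, the quantity sandwiched by the quoted bounds. The 4-PAM
# constellation, unit average power, and noise level are placeholders.
rng = np.random.default_rng(0)
symbols = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)  # 4-PAM, unit average power
weights = np.full(symbols.size, 1.0 / symbols.size)        # equiprobable symbols
sigma = 0.5                                                 # noise standard deviation

def mixture_pdf(y):
    # p(y) = sum_m w_m * N(y; s_m, sigma^2)
    return np.sum(weights * norm.pdf(y[:, None], loc=symbols, scale=sigma), axis=1)

# h(y) = -E[log2 p(y)], estimated by sampling y = s + n.
n_samples = 200_000
s = rng.choice(symbols, size=n_samples, p=weights)
y = s + rng.normal(scale=sigma, size=n_samples)
h_mc = -np.mean(np.log2(mixture_pdf(y)))
print(f"Monte Carlo estimate of h(y): {h_mc:.3f} bits")
```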
“…Note that this bound (called the Maximum Entropy Upper Bound, MEUB, in [13]) is tight when the GMM approximates a single Gaussian. It is fast to compute compared to the bound reported in [9], which uses a Taylor expansion of the log-sum of the mixture density.…”
Section: Without Loss of Generality, Consider GMMs in the Form
confidence: 99%
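The Maximum Entropy Upper Bound mentioned in this excerpt is the entropy of the Gaussian whose mean and covariance match the mixture's, which always upper-bounds the GMM entropy and is cheap to evaluate. Below is a minimal sketch of that bound; the function name meub and the example mixture are illustrative, not taken from [13].

```python
import numpy as np

def meub(weights, means, covs):
    """Maximum Entropy Upper Bound (nats): entropy of the moment-matched Gaussian,
    which upper-bounds the differential entropy of the Gaussian mixture."""
    weights = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)   # shape (K, d)
    covs = np.asarray(covs, dtype=float)     # shape (K, d, d)
    d = means.shape[1]
    mu = weights @ means                     # overall mixture mean
    # Law of total covariance: weighted second moments minus outer product of the mean.
    second_moment = np.einsum("k,kij->ij", weights,
                              covs + np.einsum("ki,kj->kij", means, means))
    cov = second_moment - np.outer(mu, mu)
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

# Example: two well-separated 2-D components with identity covariances.
w = [0.5, 0.5]
m = [[-2.0, 0.0], [2.0, 0.0]]
c = [np.eye(2), np.eye(2)]
print(f"MEUB: {meub(w, m, c):.3f} nats")
```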
“…Again, the Bregman divergence $B_{F^*}(\alpha : \alpha')$ is necessarily finite, but $\mathrm{KL}(m(x) : m'(x))$ between mixtures may be potentially infinite when the KL integral diverges (hence the condition on Jeffreys divergence finiteness). Interestingly, this Shannon information can be arbitrarily closely approximated when considering isotropic Gaussians [13]. Notice that the convex conjugate $F(\theta)$ of the continuous Shannon neg-entropy $F^*(\eta)$ is the log-sum-exp function on the inverse soft map.…”
Section: Appendix C: On the Approximation of KL Between Smooth Mixtures
confidence: 99%
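Since the KL divergence between two Gaussian mixtures has no closed form, a common baseline is plain Monte Carlo estimation. The sketch below uses that baseline rather than the Bregman-divergence formulation of the quoted appendix, and the two 1-D mixtures are arbitrary placeholders.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo approximation of KL(m : m') between two 1-D Gaussian mixtures,
# each given as a tuple (weights, means, standard deviations).
rng = np.random.default_rng(1)

def gmm_pdf(x, weights, means, stds):
    # m(x) = sum_k w_k * N(x; mu_k, sd_k^2)
    return np.sum(np.asarray(weights) * norm.pdf(x[:, None], loc=means, scale=stds), axis=1)

def kl_mc(p, q, n=200_000):
    """Estimate KL(p : q) = E_p[log p(X) - log q(X)] by sampling from p."""
    w, mu, sd = p
    comp = rng.choice(len(w), size=n, p=w)
    x = rng.normal(loc=np.asarray(mu)[comp], scale=np.asarray(sd)[comp])
    return np.mean(np.log(gmm_pdf(x, *p)) - np.log(gmm_pdf(x, *q)))

m1 = ([0.5, 0.5], [-1.0, 1.0], [0.5, 0.5])
m2 = ([0.3, 0.7], [-1.0, 2.0], [0.7, 0.4])
print(f"KL(m : m') approx {kl_mc(m1, m2):.3f} nats")
```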