2017
DOI: 10.15559/17-vmsta85

Weighted entropy: basic inequalities

Abstract: This paper represents an extended version of an earlier note [10]. The concept of weighted entropy takes into account values of different outcomes, i.e., makes entropy context-dependent, through the weight function. We analyse analogs of the Fisher information inequality and entropy power inequality for the weighted entropy and discuss connections with weighted Lieb's splitting inequality. The concepts of rates of the weighted entropy and information are also discussed.

Keywords: Weighted entropy, Gibbs inequality, …
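For orientation: the weighted entropy underlying the paper is the classical quantity introduced by Guiaşu. For a discrete random variable X with probability mass function p and a nonnegative weight function φ (the symbol choices here are illustrative and may differ from the paper's notation), it reads

\[
  H^{w}_{\varphi}(X) \;=\; -\sum_{x} \varphi(x)\, p(x) \log p(x),
\]

which reduces to the Shannon entropy when φ ≡ 1, and which makes the entropy context-dependent by letting φ encode the value of each outcome.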


Cited by 13 publications (5 citation statements); References: 28 publications.

Citation statements:
“…Shannon develops the concept of information entropy to express the randomness [24]. The information entropy describes the reliability, importance, and the weight of an influential factor [25]. In order to avoid the interference of human factors, we use entropy to weight attributes.…”
Section: Weight Methods Based on Information Entropy (mentioning)
confidence: 99%
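The statement above refers to the entropy weight method, in which attribute weights are derived from the dispersion of the data rather than from expert judgment. A minimal sketch, assuming a nonnegative decision matrix with alternatives as rows and attributes as columns (the function and variable names are illustrative, not from the cited paper):

import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weight method: attributes whose values vary more across
    alternatives carry more information and receive larger weights.
    X: nonnegative matrix, rows = alternatives, columns = attributes."""
    m, _ = X.shape
    # Normalize each column so it can be read as a probability distribution.
    P = X / X.sum(axis=0, keepdims=True)
    # Shannon entropy of each column, scaled to [0, 1] by log(m);
    # zero entries contribute 0 by the convention 0 * log 0 = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)
    # Degree of diversification; the weights are its normalized values.
    d = 1.0 - e
    return d / d.sum()

# Example: 4 alternatives scored on 3 attributes.
scores = np.array([[0.2, 0.5, 0.9],
                   [0.4, 0.5, 0.1],
                   [0.6, 0.5, 0.8],
                   [0.8, 0.5, 0.2]])
print(entropy_weights(scores))  # the constant column gets weight ~0

A column that is constant across alternatives has maximal (normalized) entropy and thus carries no discriminating information, so its weight vanishes, which is exactly the "avoid human interference" rationale quoted above.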
“…For simplicity, we use $p(\tau^g)$ to represent $\sum_{g^e} p_R(\tau^g, g^e \mid \theta)$, which is the occurrence probability of the goal-state trajectory $\tau^g$. The expectation is calculated based on $p(\tau^g)$ as well, so the proposed objective is the weighted entropy (Guiaşu, 1971; Kelbert et al., 2017) of $\tau^g$, which we denote as $H^w_p(\mathcal{T}^g)$, where the weight $w$ is the accumulated reward…”
Section: Maximum Entropy-Regularized Multi-Goal RL (mentioning)
confidence: 99%
“…$Z$ is the normalization factor for $q(\tau^g)$. $H^w_p(\mathcal{T}^g)$ is the weighted entropy (Guiaşu, 1971; Kelbert et al., 2017), where the weight is the accumulated reward $\sum_{t=1}^{T} r(S_t, G^e)$, in our case.…”
Section: Surrogate Objective (mentioning)
confidence: 99%
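Read together, the two statements above instantiate the weighted entropy from the abstract with goal-state trajectories as outcomes and the accumulated reward as the weight function. A sketch in the reconstructed notation (the sub/superscript placement and the summation over trajectories are assumptions recovered from the garbled extraction, not verbatim from the citing paper):

\[
  H^{w}_{p}(\mathcal{T}^{g})
  \;=\; -\sum_{\tau^{g}} w(\tau^{g})\, p(\tau^{g}) \log p(\tau^{g}),
  \qquad
  w(\tau^{g}) \;=\; \sum_{t=1}^{T} r(S_{t}, G^{e}).
\]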
“…The notion proved useful in various applications and gave rise to many papers. We quote here but a few: Barbu et al. [15], Batty [16], Das [17], Guiasu [18], Kayal [19], Kelbert et al. [20], Smieja [21], Suhov [22], and Tunnicliffe et al. [23], where the interested reader may find more details.…”
Section: Introduction (mentioning)
confidence: 99%