2017
DOI: 10.1007/978-3-319-71504-9_25

Weighted Entropy and its Use in Computer Science and Beyond


Cited by 9 publications (4 citation statements) · References 16 publications
“…H^w(w_1, …, w_n, w_{n+1}; p_1, …, p_n, 0) = H^w(w_1, …, w_n; p_1, …, p_n) = H^w(X) for any w_{n+1}. These properties of the weighted entropy apply only to the discrete case; the difficulties arising in the continuous case are discussed in (Kelbert, Stuhl, & Suhov, 2017). On the other hand, the continuous (differential) version can also be implemented quite easily, provided adequate precautions are taken to double-check the consistency of the results.…”
Section: A. Weighting the Various Parts of the PDF (mentioning, confidence: 99%)
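As a quick sanity check on the quoted property, the short Python sketch below (an illustration under the standard definition H^w(X) = -Σ_i w_i p_i log p_i, not code from the cited works) verifies that appending an outcome with probability 0 and an arbitrary weight w_{n+1} leaves the discrete weighted entropy unchanged.

```python
import numpy as np

# Discrete weighted entropy H^w(X) = -sum_i w_i * p_i * log(p_i),
# with the usual convention 0 * log 0 = 0.
def weighted_entropy(weights, probs):
    weights = np.asarray(weights, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mask = probs > 0  # zero-probability outcomes contribute nothing
    return -np.sum(weights[mask] * probs[mask] * np.log(probs[mask]))

w = [0.5, 2.0, 1.0]   # weights w_1..w_n
p = [0.2, 0.3, 0.5]   # probabilities p_1..p_n

# Appending an outcome with probability 0 and any weight w_{n+1}
# leaves the weighted entropy unchanged.
for w_extra in (0.0, 1.0, 100.0):
    assert np.isclose(weighted_entropy(w + [w_extra], p + [0.0]),
                      weighted_entropy(w, p))
print(weighted_entropy(w, p))
```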
“…In our opinion, the objective of achieving maximum entropy in RL is to learn a nearly optimal solution that preserves multi-modality. Our work is motivated by weighted entropy (WE) [22,34], which takes into account the values of different outcomes, i.e., makes entropy context-dependent, through the weight function [35]. Zhao et al. [36] proposed a similar idea of maximizing the entropy weighted by rewards, whereas we aim to design more arbitrary weight functions and more flexible algorithms.…”
Section: Related Work (mentioning, confidence: 99%)
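To make the notion of a context-dependent entropy bonus concrete, here is a minimal, hypothetical sketch; the weight function phi (here a normalised per-action value estimate q_hat) is an illustrative assumption, not the construction used in [22,34,35] or [36]. With phi constant, the expression reduces to the ordinary Shannon entropy bonus of maximum-entropy RL.

```python
import numpy as np

# Weighted entropy of a discrete policy in one state:
#   -sum_a phi(a) * pi(a) * log(pi(a)).
# With phi == 1 this is the usual entropy bonus of max-entropy RL.
def weighted_policy_entropy(pi, phi):
    mask = pi > 0
    return -np.sum(phi[mask] * pi[mask] * np.log(pi[mask]))

pi = np.array([0.6, 0.3, 0.1])     # policy over three actions in some state
q_hat = np.array([1.0, 2.0, 0.5])  # hypothetical per-action value estimates

plain_bonus = weighted_policy_entropy(pi, np.ones_like(pi))         # Shannon entropy
value_weighted = weighted_policy_entropy(pi, q_hat / q_hat.sum())   # context-dependent
print(plain_bonus, value_weighted)
```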
“…This paper represents an extended version of an earlier note [10]. We also follow earlier publications discussing related topics: [20,21,19,18].…”
Section: Introduction (mentioning, confidence: 98%)
“…This paper represents an extended version of an earlier note [10]. The concept of weighted entropy takes into account the values of different outcomes, i.e., it makes entropy context-dependent, through the weight function.…”
(mentioning, confidence: 99%)