2022
DOI: 10.3934/dcds.2022026

On information gain, Kullback-Leibler divergence, entropy production and the involution kernel

Abstract: It is well known that in Information Theory and Machine Learning the Kullback-Leibler divergence, which extends the concept of Shannon entropy, plays a fundamental role. Given an a priori probability kernel $\hat{\nu}$ and a probability $\pi$ …
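For orientation, a standard textbook form of the Kullback-Leibler divergence of a probability $\mu$ with respect to a reference probability $\nu$ (stated here in general, not in the paper's specific setting with the kernel $\hat{\nu}$ and the probability $\pi$) is

$$ D_{KL}(\mu \,\|\, \nu) \;=\; \sum_{x} \mu(x)\,\log\frac{\mu(x)}{\nu(x)} \;=\; \int \log\frac{d\mu}{d\nu}\, d\mu \;\ge\; 0. $$

When $\nu$ is the uniform distribution on $d$ symbols this reduces to $\log d - H(\mu)$, where $H(\mu)$ is the Shannon entropy, which is one sense in which the divergence extends the entropy concept mentioned in the abstract.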


Cited by 8 publications (10 citation statements). References 37 publications (107 reference statements).
“…Here we are interested in the Kullback-Leibler divergence (KL-divergence for short) of shift invariant probabilities (see (17) in [37])…”
Section: KL-divergence and Dynamical Information Projections (mentioning; confidence: 99%)
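For reference, one common way to make sense of the KL-divergence of two shift-invariant probabilities $\mu$ and $\nu$ on a finite-alphabet shift space is as the relative entropy rate over cylinder sets (a standard formulation, not meant to reproduce the exact expression (17) of [37]):

$$ D_{KL}(\mu \,|\, \nu) \;=\; \lim_{n\to\infty} \frac{1}{n} \sum_{|C| = n} \mu(C)\,\log\frac{\mu(C)}{\nu(C)}, $$

where the sum runs over the cylinder sets $C$ determined by the first $n$ coordinates and the limit is taken whenever it exists.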
“…From a Bayesian point of view, the probability $\mu_1$ describes the prior probability and $\mu_\lambda$ plays the role of the posterior probability in the inductive inference problem described by expression $D_{KL}(\mu_\lambda, \mu_1)$ (see Section 2.10 in [10], [37] and [20]). The function $\log J_\lambda - \log J_1$ should be considered as the likelihood function (see [23]).…”
Section: KL-divergence and Dynamical Information Projections (mentioning; confidence: 99%)
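As a point of orientation (a standard identity, stated for the case where $\mu_\lambda$ is absolutely continuous with respect to $\mu_1$; for shift-invariant Gibbs probabilities one uses instead the entropy-rate version above):

$$ D_{KL}(\mu_\lambda, \mu_1) \;=\; \int \log\frac{d\mu_\lambda}{d\mu_1}\, d\mu_\lambda, $$

so the divergence is the expectation, under the posterior $\mu_\lambda$, of a log-likelihood ratio against the prior $\mu_1$. In the dynamical setting of the quotation the difference of Jacobians $\log J_\lambda - \log J_1$ plays the analogous role, which is why the citing authors read it as the likelihood function.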
“…The relative entropy is also known as the Kullback-Leibler divergence. For proofs of general results on the topic in the case of the shift we refer the reader to [8] for the case where the alphabet is finite and to [28] for the case where the alphabet is a compact metric space. We refer the reader to [21] for an application of Kullback-Leibler divergence in statistics.…”
Section: Preliminaries (mentioning; confidence: 99%)
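To make the finite-alphabet case concrete, here is a minimal numerical sketch (illustrative only; it is not code from the paper or its references, and the function names are ours). For two i.i.d. (product) measures on the full shift with letter marginals p and q, the cylinder-based relative entropy rate equals the one-letter divergence D(p||q) for every word length n:

```python
import itertools
import numpy as np

def kl_divergence(p, q):
    """Plain KL divergence D(p||q) of two probability vectors on a finite alphabet."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def kl_rate_cylinders(p, q, n):
    """
    (1/n) * sum over length-n cylinders C of mu(C) * log(mu(C)/nu(C)),
    where mu, nu are the i.i.d. product measures on the full shift with
    letter marginals p and q.  For product measures this equals D(p||q).
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    total = 0.0
    for word in itertools.product(range(len(p)), repeat=n):
        mu_c = float(np.prod(p[list(word)]))  # cylinder probability under mu
        nu_c = float(np.prod(q[list(word)]))  # cylinder probability under nu
        if mu_c > 0:
            total += mu_c * np.log(mu_c / nu_c)
    return total / n

if __name__ == "__main__":
    p, q = [0.7, 0.3], [0.5, 0.5]
    print(kl_rate_cylinders(p, q, n=8))  # approximately 0.0823 nats
    print(kl_divergence(p, q))           # same value: D(p||q)
```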
“…In [1], using properties of the involution kernel, the authors proved Large Deviation Theorems for the zero-temperature limit in the case where selection exists ([33] considers the case where selection is not assumed). In another direction, in [24] it is shown that the involution kernel appears as a natural tool for the investigation of the entropy production of $\mu_A$. If the potential $A$ is symmetric then the entropy production of $\mu_A$ is zero (see Section 7 in [24]).…”
Section: Introduction (mentioning; confidence: 99%)
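The fact that symmetry forces zero entropy production has a familiar analogue for stationary Markov chains (an illustration only; the involution-kernel definition of entropy production used in [24] is more general). For a stationary chain with transition matrix $P$ and stationary vector $\pi$, the entropy production rate

$$ e_p \;=\; \sum_{i,j} \pi_i P_{ij}\,\log\frac{\pi_i P_{ij}}{\pi_j P_{ji}} \;\ge\; 0 $$

vanishes exactly when detailed balance $\pi_i P_{ij} = \pi_j P_{ji}$ holds, i.e. when the chain is reversible; loosely, the symmetry hypothesis on the potential $A$ plays a role analogous to detailed balance.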