Abstract. Cencov has shown that Riemannian metrics which are derived from the Fisher information matrix are the only metrics which preserve inner products under certain probabilistically important mappings. In Cencov's theorem, the underlying differentiable manifold is the probability simplex $\sum_i x_i = 1$, $x_i > 0$. For some purposes of using geometry to obtain insights about probability, it is more convenient to regard the simplex as a hypersurface in the positive cone. In the present paper Cencov's result is extended to the positive cone. The proof uses standard techniques of differential geometry but does not use the language of category theory.

1. Introduction. There has been a good deal of interest in the use of differential geometry to interpret certain operations on probability distributions in statistics [2, 4, 5], biomathematics [9], thermodynamics [7], and information theory [3]. Further references are to be found in those cited above. Much of this literature begins by introducing a Riemannian metric which is generated by the Fisher information matrix. One reason for singling out this particular metric is to be found in a theorem of Cencov [4, Theorem 11.1 or Lemma 11.3], which characterizes this information metric on the probability simplex as the only metric having a certain invariance property under some probabilistically natural mappings.

In this paper, we develop a characterization theorem which is closely related to Cencov's. The principal difference between the two theorems is that we characterize Riemannian metrics on the positive cones $R^n_+ = \{x = (x_1, \ldots, x_n) : x_i > 0\}$, while Cencov characterizes them on the probability simplexes $S_{n-1} = \{x \in R^n_+ : \sum_i x_i = 1\}$. As will be seen later, the connection between geometry and probability is enhanced if $S_{n-1}$ is regarded as a surface in the differentiable manifold $R^n_+$. In addition, some of Shahshahani's development [9] requires a metric on $R^n_+$. A second difference from Cencov's theorem is that neither the statement of our result nor the proof uses the language of category theory.
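For concreteness (this illustration is an addition, not part of the abstract above): the prototypical information metric on the positive cone that this line of work refers to is, up to a constant multiple, the Shahshahani form $g_{ij}(x) = \delta_{ij}/x_i$, whose restriction to vectors tangent to the simplex is the Fisher information metric. A minimal numerical sketch of the corresponding inner product, under that assumption:

```python
import numpy as np

def information_inner_product(x, u, v):
    """Inner product <u, v>_x of tangent vectors u, v at a point x of the
    positive cone, using the (assumed) Shahshahani/Fisher form
    g_ij(x) = delta_ij / x_i."""
    x, u, v = map(np.asarray, (x, u, v))
    return float(np.sum(u * v / x))

# A point of the simplex x_1 + x_2 + x_3 = 1, viewed inside the positive cone,
# and two vectors tangent to the simplex (their components sum to zero).
x = np.array([0.2, 0.3, 0.5])
u = np.array([1.0, -1.0, 0.0])
v = np.array([0.0, 1.0, -1.0])
print(information_inner_product(x, u, v))   # <u, v>_x restricted to the simplex
print(information_inner_product(x, u, u))   # squared length of u at x
```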
Abstract - In this work, we provide a computable expression for the Kullback-Leibler divergence rate $\lim_{n\to\infty} \frac{1}{n} D(p^{(n)} \| q^{(n)})$ between two time-invariant finite-alphabet Markov sources of arbitrary order and arbitrary initial distributions described by the probability distributions $p^{(n)}$ and $q^{(n)}$, respectively. We illustrate it numerically and examine its rate of convergence. The main tools used to obtain the Kullback-Leibler divergence rate and its rate of convergence are the theory of nonnegative matrices and Perron-Frobenius theory. Similarly, we provide a formula for the Shannon entropy rate $\lim_{n\to\infty} \frac{1}{n} H(p^{(n)})$ of Markov sources and examine its rate of convergence.
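The abstract does not reproduce the computable expression itself. In the well-known special case of a first-order irreducible chain with transition matrix $P$ (stationary distribution $\pi$) and a comparison chain $Q$ with $Q_{ij} > 0$ wherever $P_{ij} > 0$, the divergence rate reduces to $\sum_i \pi_i \sum_j P_{ij} \log(P_{ij}/Q_{ij})$ and the Shannon entropy rate to $-\sum_i \pi_i \sum_j P_{ij} \log P_{ij}$; the paper's general result covers arbitrary order and arbitrary initial distributions, which the following minimal sketch does not:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an irreducible stochastic matrix P
    (left eigenvector of P for the eigenvalue 1, normalized to sum to 1)."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = int(np.argmin(np.abs(eigvals - 1.0)))
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()

def kl_divergence_rate(P, Q):
    """KL divergence rate (nats/symbol) between first-order Markov chains with
    transition matrices P and Q; assumes P irreducible and Q[i, j] > 0
    whenever P[i, j] > 0."""
    pi = stationary_distribution(P)
    n = P.shape[0]
    return sum(pi[i] * P[i, j] * np.log(P[i, j] / Q[i, j])
               for i in range(n) for j in range(n) if P[i, j] > 0)

def entropy_rate(P):
    """Shannon entropy rate (nats/symbol) of an irreducible first-order chain P."""
    pi = stationary_distribution(P)
    n = P.shape[0]
    return -sum(pi[i] * P[i, j] * np.log(P[i, j])
                for i in range(n) for j in range(n) if P[i, j] > 0)

P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.5, 0.5], [0.5, 0.5]])
print(kl_divergence_rate(P, Q), entropy_rate(P))
```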
In this work, we examine the existence and the computation of the Rényi divergence rate, $\lim_{n\to\infty} \frac{1}{n} D_\alpha(p^{(n)} \| q^{(n)})$, between two time-invariant finite-alphabet Markov sources of arbitrary order and arbitrary initial distributions described by the probability distributions $p^{(n)}$ and $q^{(n)}$, respectively. This yields a generalization of a result of Nemetz, where he assumed that the initial probabilities under $p^{(n)}$ and $q^{(n)}$ are strictly positive. The main tools used to obtain the Rényi divergence rate are the theory of nonnegative matrices and Perron-Frobenius theory. We also provide numerical examples and investigate the limits of the Rényi divergence rate as $\alpha \to 1$ and as $\alpha \to 0$. Similarly, we provide a formula for the Rényi entropy rate $\lim_{n\to\infty} \frac{1}{n} H_\alpha(p^{(n)})$ of Markov sources and examine its limits as $\alpha \to 1$ and as $\alpha \to 0$. Finally, we briefly provide an application to source coding.
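Again as an added illustration rather than a quotation from the abstract: in the classical setting going back to Nemetz (first-order chains whose transition matrices $P$ and $Q$ are strictly positive), the Rényi divergence rate of order $\alpha \ne 1$ is $(\alpha - 1)^{-1} \log \lambda(R)$, where $\lambda(R)$ is the Perron-Frobenius eigenvalue (spectral radius) of the matrix with entries $R_{ij} = P_{ij}^{\alpha} Q_{ij}^{1-\alpha}$. A minimal sketch under those assumptions; the paper's result handles arbitrary order and arbitrary initial distributions, which this sketch does not:

```python
import numpy as np

def renyi_divergence_rate(P, Q, alpha):
    """Renyi divergence rate of order alpha (alpha > 0, alpha != 1), in
    nats/symbol, between first-order Markov chains with strictly positive
    transition matrices P and Q: log(spectral radius of R) / (alpha - 1),
    where R_ij = P_ij**alpha * Q_ij**(1 - alpha)."""
    R = np.power(P, alpha) * np.power(Q, 1.0 - alpha)
    lam = np.max(np.abs(np.linalg.eigvals(R)))   # Perron-Frobenius eigenvalue
    return float(np.log(lam) / (alpha - 1.0))

P = np.array([[0.9, 0.1], [0.2, 0.8]])
Q = np.array([[0.5, 0.5], [0.5, 0.5]])
# As alpha -> 1 the rate should approach the Kullback-Leibler divergence rate.
for a in (0.5, 0.9, 0.99, 1.01, 2.0):
    print(a, renyi_divergence_rate(P, Q, a))
```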