In this paper, we identify a class of absolutely continuous probability distributions and show that the differential entropy is uniformly convergent over this class under the total variation distance metric. One advantage of this class is that its defining requirements can be readily verified for a given distribution.
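Concretely, writing h(p) = −∫ p(x) log p(x) dx for the differential entropy and TV(p, q) = ½ ∫ |p(x) − q(x)| dx for the total variation distance, one standard way to read the statement is as uniform continuity of h over the identified class P (P is our placeholder name for the class, not the paper's notation):

    for every ε > 0 there exists δ > 0 such that, for all p, q ∈ P,
    TV(p, q) < δ  implies  |h(p) − h(q)| < ε.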
The sparsity and compressibility of finite-dimensional signals are of great interest in fields such as compressed sensing. The notion of compressibility has also been extended to infinite sequences of i.i.d. or ergodic random variables, based on the observed error in their nonlinear k-term approximation. In this work, we use the entropy measure to study the compressibility of continuous-domain innovation processes (alternatively known as white noise). Specifically, we define such a measure as the entropy limit of the doubly quantized (in time and amplitude) process. This provides a tool to compare the compressibility of various innovation processes. It also allows us to identify an analogue of the concept of "entropy dimension", originally defined by Rényi for random variables. Particular attention is given to stable and impulsive Poisson innovation processes. Here, our results recognize Poisson innovations as the more compressible ones, with an entropy measure far below that of stable innovations. While this result departs from previous knowledge regarding the compressibility of fat-tailed distributions, our entropy measure ranks stable innovations according to their tail decay.

Index Terms: Compressibility, entropy, impulsive Poisson process, stable innovation, white Lévy noise.

Notation:

  Symbol                                  Definition
  Sets                                    Calligraphic letters such as A, C, D, ...
  Real and natural numbers                R, N
  Borel sets in R                         B(R), or simply B
  Random variables                        Capital letters such as A, X, Y, Z, ...
  pdf of a (continuous) X                 p_X or q_X (lower-case letter p)
  pmf of a (discrete) X                   P_X (upper-case letter P)
  cdf of X                                F_X
  X_0, for a given white noise X(t)       ∫_0^1 X(t) dt; a random variable

Definition 2 (Discrete Random Variable) [23]. A random variable X is called discrete if it takes values in a countable alphabet set X ⊂ R.

Definition 3 (Discrete-Continuous Random Variable) [23]. A random variable X is called discrete-continuous with parameters (p_c, P_D, Pr{X ∈ D}) if there exist a countable set D, a discrete probability mass function P_D whose support is D, and a pdf p_c ∈ AC such that 0 < Pr{X ∈ D} < 1 and the (generalized) pdf of X is the mixture

  p_X(x) = Pr{X ∈ D} Σ_{a ∈ D} P_D(a) δ(x − a) + (1 − Pr{X ∈ D}) p_c(x).
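For context, the "entropy dimension" referred to above is, for a single random variable X, Rényi's information (entropy) dimension. A standard statement, with ⟨X⟩_m denoting the uniform quantization of X to resolution 1/m (our illustrative notation, not necessarily the paper's):

  d(X) = lim_{m→∞} H(⟨X⟩_m) / log m,   where ⟨X⟩_m = ⌊mX⌋ / m.

In particular, under mild conditions (e.g., H(⌊X⌋) < ∞), a discrete-continuous X as in Definition 3 satisfies d(X) = 1 − Pr{X ∈ D}: the larger the discrete mass, the smaller the dimension, matching the intuition that mass points make X more compressible.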
Novel approaches to estimating information measures using neural networks have attracted considerable attention in recent years in both the information theory and machine learning communities. These neural estimators have been shown to converge to the true values when estimating mutual information and conditional mutual information from independent samples. However, if the samples in the dataset are not independent, the consistency of these estimators requires further investigation. This is of particular interest for a more complex measure such as directed information, which is pivotal in characterizing causality and is meaningful over time-dependent variables. Extending the convergence proof to such cases is not trivial and demands further assumptions on the data. In this paper, we show that our neural estimator for conditional mutual information is consistent when the dataset is generated by a stationary and ergodic source. In other words, we show that our neural information estimator converges asymptotically to the true value with probability one. Besides the universal function approximation property of neural networks, the core lemma in the convergence proof is Birkhoff's ergodic theorem. Additionally, we apply this technique to estimate directed information and demonstrate the effectiveness of our approach in simulations.
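As a concrete, deliberately minimal illustration of the kind of estimator analyzed here, the sketch below trains a Donsker-Varadhan (MINE-style) neural lower bound on mutual information. It assumes PyTorch; the network size, training schedule, and toy Gaussian data are our illustrative choices, not the paper's setup. With i.i.d. samples such estimators are known to converge; the paper's contribution concerns the stationary-ergodic case.

    import math
    import torch
    import torch.nn as nn

    class StatNet(nn.Module):
        """Critic network T(x, y) for the Donsker-Varadhan lower bound."""
        def __init__(self, dim_x, dim_y, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, y):
            return self.net(torch.cat([x, y], dim=-1))

    def dv_mi_estimate(net, x, y):
        # I(X;Y) >= E_P[T(X,Y)] - log E_{P_X x P_Y}[exp(T(X,Y))];
        # the product of marginals is approximated by shuffling y in-batch.
        joint = net(x, y).mean()
        y_perm = y[torch.randperm(y.shape[0])]
        scores = net(x, y_perm).squeeze(-1)
        marginal = torch.logsumexp(scores, dim=0) - math.log(scores.shape[0])
        return joint - marginal

    # Toy correlated data (illustrative): Y = X + 0.5*N, whose analytic
    # mutual information is 0.5*ln(5) ~ 0.80 nats.
    torch.manual_seed(0)
    x = torch.randn(512, 1)
    y = x + 0.5 * torch.randn(512, 1)

    net = StatNet(1, 1)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = -dv_mi_estimate(net, x, y)   # maximize the DV bound
        loss.backward()
        opt.step()
    print(f"estimated I(X;Y): {dv_mi_estimate(net, x, y).item():.3f} nats")

The same critic-based construction extends to conditional mutual information and, summed over time, yields directed information estimates of the kind studied in the paper.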