We define the lower and upper mutual dimensions mdim(x : y) and Mdim(x : y) between any two points x and y in Euclidean space. Intuitively, these are the lower and upper densities of the algorithmic information shared by x and y. We show that these quantities satisfy the main desiderata for a satisfactory measure of mutual algorithmic information. Our main theorem, the data processing inequality for mutual dimension, says that if f : R^m → R^n is computable and Lipschitz, then the inequalities mdim(f(x) : y) ≤ mdim(x : y) and Mdim(f(x) : y) ≤ Mdim(x : y) hold for all x ∈ R^m and y ∈ R^t. We use this inequality, and related inequalities that we prove in like fashion, to establish conditions under which various classes of computable functions on Euclidean space preserve or otherwise transform mutual dimensions between points.
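For orientation, here is a minimal sketch of how such densities are standardly formalized, assuming the precision-r Kolmogorov complexity K_r(x) (the length of a shortest program that outputs some rational point within 2^{-r} of x); the precise definitions are the paper's, and the shared-information term I_r below is part of this sketch:

```latex
% A sketch, assuming the precision-r complexity K_r(x): the length of a
% shortest program printing a rational point within 2^{-r} of x.
\[
  I_r(x : y) \;=\; K_r(x) + K_r(y) - K_r(x, y)
\]
% mdim and Mdim as the lower and upper densities of this shared information:
\[
  \operatorname{mdim}(x : y) \;=\; \liminf_{r \to \infty} \frac{I_r(x : y)}{r},
  \qquad
  \operatorname{Mdim}(x : y) \;=\; \limsup_{r \to \infty} \frac{I_r(x : y)}{r}
\]
```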
If S and T are infinite sequences over a finite alphabet, then the lower and upper mutual dimensions mdim(S : T) and Mdim(S : T) are the lower and upper densities of the algorithmic information that is shared by S and T. In this paper we investigate the relationships between mutual dimension and coupled randomness, which is the algorithmic randomness of two sequences R_1 and R_2 with respect to probability measures that may depend on one another. For a restricted but interesting class of coupled probability measures we prove an explicit formula for the mutual dimensions mdim(R_1 : R_2) and Mdim(R_1 : R_2), and we show that the condition Mdim(R_1 : R_2) = 0 is necessary but not sufficient for R_1 and R_2 to be independently random. We also identify conditions under which Billingsley generalizations of the mutual dimensions mdim(S : T) and Mdim(S : T) can be meaningfully defined; we show that under these conditions these generalized mutual dimensions have the "correct" relationships with the Billingsley generalizations of dim(S), Dim(S), dim(T), and Dim(T) that were developed and applied by Lutz and Mayordomo; and we prove a divergence formula for the values of these generalized mutual dimensions.
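To see what an explicit formula of this kind can look like, suppose (an assumption of this sketch, not the paper's exact statement) that the coupled measure is generated by i.i.d. draws of symbol pairs from a joint distribution alpha on Σ × Σ. The classical Shannon mutual information of alpha, normalized by log |Σ|, is then the natural candidate value for mdim(R_1 : R_2) = Mdim(R_1 : R_2):

```python
import math

def mutual_information(joint):
    """Shannon mutual information I(X; Y) in bits, for a joint distribution
    given as a dict mapping (x, y) pairs to probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A coupling of two uniform bits that agree with probability 3/4.
alpha = {(0, 0): 0.375, (0, 1): 0.125, (1, 0): 0.125, (1, 1): 0.375}

# Normalizing by log2 |Sigma| gives the candidate mutual-dimension value
# (a hypothetical reading of the paper's formula, not its exact statement).
print(mutual_information(alpha) / math.log2(2))  # ~0.1887
```

Note that if alpha is a product distribution the value is 0, which is consistent with the necessity direction above: independently random sequences must satisfy Mdim(R_1 : R_2) = 0.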
Informally, we say that α_S is the real representation of S. Note that, in this section, we often use the notation S ↾ r to mean the first r ∈ N symbols of a sequence S. We begin by reviewing some definitions and theorems of algorithmic information theory; all Turing machines are assumed to be self-delimiting.
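A sketch of the standard real representation, assuming the alphabet is Σ = {0, 1, ..., k − 1} (the paper's formal definition may differ in details):

```latex
% Real representation of a sequence S over {0, 1, ..., k-1}: the real
% number in [0, 1] whose base-k expansion is S.
\[
  \alpha_S \;=\; \sum_{i=0}^{\infty} S[i]\, k^{-(i+1)}
\]
```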
A data processing inequality states that the quantity of shared information between two entities (e.g., signals, strings) cannot be significantly increased when one of the entities is processed by certain kinds of transformations. In this paper, we prove several data processing inequalities for sequences, where the transformations are bounded Turing functionals and the shared information is measured by the lower and upper mutual dimensions between sequences. We show that, for all sequences X, Y, and Z, if Z is computable Lipschitz reducible to X, then mdim(Z : Y) ≤ mdim(X : Y) and Mdim(Z : Y) ≤ Mdim(X : Y). We also show how to derive different data processing inequalities by making adjustments to the computable bounds of the use of a Turing functional. The yield of a Turing functional Φ^S with access to at most n bits of the oracle S is the smallest input m ∈ N such that Φ^{S↾n}(m)↑. We show how to derive reverse data processing inequalities (i.e., data processing inequalities where the transformation may significantly increase the shared information between two entities) for sequences by applying computable bounds to the yield of a Turing functional.
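For reference, computable Lipschitz reducibility is commonly defined as Turing reducibility whose use is bounded by the identity plus a constant, which makes the yield notion above a natural dual of the use; a sketch of both in symbols (the constant c and the use function are the standard ones, not notation from this paper):

```latex
% Computable Lipschitz reducibility: a Turing reduction whose use on
% input n is bounded by n + c for some constant c.
\[
  Z \le_{\mathrm{cl}} X \;\iff\;
  (\exists \Phi)(\exists c \in \mathbb{N})
  \bigl[\, \Phi^X = Z \;\text{ and }\; \mathrm{use}_{\Phi^X}(n) \le n + c \,\bigr]
\]
% Yield: the least input on which Phi, restricted to the length-n oracle
% prefix, diverges.
\[
  \mathrm{yield}_{\Phi}(S, n) \;=\;
  \min\{\, m \in \mathbb{N} \;:\; \Phi^{S \upharpoonright n}(m)\uparrow \,\}
\]
```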