If there are two dependent positive real variables x1 and x2, and only x1 is known, what is the probability that x2 is larger versus smaller than x1? There is no uniquely correct answer according to the “frequentist” and “subjective Bayesian” definitions of probability. Here we derive the answer given the “objective Bayesian” definition developed by Jeffreys, Cox, and Jaynes. We take as axioms the standard distance metric in one dimension, d(A,B)≡|A−B|, and the uniform prior distribution. If neither variable is known, P(x2<x1)=P(x2>x1). This appears obvious, since the state spaces x2<x1 and x2>x1 have equal size. However, if x1 is known and x2 unknown, there are infinitely more numbers in the space x2>x1 than in x2<x1. Despite this asymmetry, we prove P(x2<x1|x1)=P(x2>x1|x1), so that x1 is the median of p(x2|x1), and x1 is statistically independent of the ratio x2/x1. We present three proofs that apply to all members of a set of distributions. Each member is distinguished by the form of dependence between variables implicit within a statistical model (gamma, Gaussian, etc.), but all exhibit two symmetries in the joint distribution p(x1,x2) that are required in the absence of prior information: exchangeability of the variables, and non-informative priors over the marginal distributions p(x1) and p(x2). We relate our conclusion to physical models of prediction and intelligence, where the known ‘sample’ could be the present internal energy within a sensor, and the unknown could be the energy in its external sensory cause or future motor effect.
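
A minimal numerical sketch of the claimed symmetry, under one assumed member of the family that is not singled out in the text: let x1 and x2 be conditionally independent exponential variables sharing a latent rate λ, with the non-informative Jeffreys prior p(λ) ∝ 1/λ. This gives the exchangeable joint p(x1,x2) = 1/(x1+x2)², scale-invariant marginals p(xi) ∝ 1/xi, the posterior λ|x1 ~ Exponential(rate x1), and the predictive p(x2|x1) = x1/(x1+x2)², whose median is exactly x1. The Python sketch below (model choice and names are illustrative assumptions, not the paper's proofs) checks P(x2>x1|x1) and the ratio x2/x1 by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_x2_given_x1(x1, n):
    """Posterior-predictive draws of x2 given x1 under the assumed model:
    x_i | lam ~ Exponential(rate=lam), with Jeffreys prior p(lam) ∝ 1/lam,
    so that lam | x1 ~ Exponential(rate=x1)."""
    lam = rng.exponential(scale=1.0 / x1, size=n)  # posterior draws of the latent rate
    return rng.exponential(scale=1.0 / lam)        # predictive draws of x2

for x1 in (0.1, 1.0, 50.0):
    x2 = sample_x2_given_x1(x1, 1_000_000)
    print(f"x1={x1:6.1f}   P(x2 > x1 | x1) ≈ {np.mean(x2 > x1):.4f}   "
          f"median(x2 / x1) ≈ {np.median(x2 / x1):.3f}")
```

For every value of x1, the estimated exceedance probability stays near 0.5 and the median of x2/x1 stays near 1, illustrating that x1 is the median of p(x2|x1) and that the ratio x2/x1 is distributed independently of x1 in this member of the family.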