Abstract. The observable output of a probabilistic system that processes a secret input might reveal some information about that input. The system can be modelled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures can then quantify the amount of information leakage caused by the channel. But it turns out that the conventional channel representation, as a matrix, contains structure that is redundant with respect to that leakage, such as the labeling of columns, and columns that are scalar multiples of each other. We therefore introduce abstract channels by quotienting over those redundancies.

A fundamental question for channels is whether one is worse than another, from a leakage point of view. But it is difficult to answer this question robustly, given the multitude of possible prior distributions and leakage measures. Indeed, there is growing recognition that different leakage measures are appropriate in different circumstances, leading to the recently proposed g-leakage measures, which use gain functions g to model the operational scenario in which a channel operates: the strong g-leakage pre-order requires that channel A never leak more than channel B, for any prior and any gain function. Here we show that, on abstract channels, the strong g-leakage pre-order is antisymmetric, and therefore a partial order.

It was previously shown [1] that the strong g-leakage ordering is implied by a structural ordering called composition refinement, which requires that A = BR for some channel R; but the converse was not established in full generality, and was left open as the so-called Coriaceous Conjecture. Using ideas from [2], we here confirm the Coriaceous Conjecture. Hence the strong g-leakage ordering and composition refinement coincide, giving our partial order both structural and leakage-testing significance.
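To make the two constructions in this abstract concrete, the sketch below quotients a channel matrix to its abstract channel and computes g-leakage for a given prior and gain function. This is a minimal illustration under our own representation assumptions, not code from the paper: channels are row-stochastic NumPy matrices with secrets on rows and observables on columns, gain functions are given as matrices g[w, x], and the helper names abstract_channel and g_leakage are illustrative.

```python
import numpy as np

def abstract_channel(C, tol=1e-9):
    """Quotient a channel matrix to its abstract channel: drop
    all-zero columns (outputs that never occur) and merge columns
    that are scalar multiples of one another, since both kinds of
    redundancy are invisible to every leakage measure."""
    C = np.asarray(C, dtype=float)
    classes = []  # (normalized column, accumulated weight) pairs
    for col in C.T:
        w = col.sum()
        if w <= tol:
            continue
        d = col / w  # the column's direction, i.e. its posterior shape
        for k, (dk, wk) in enumerate(classes):
            if np.allclose(d, dk, atol=1e-8):
                classes[k] = (dk, wk + w)  # merge proportional columns
                break
        else:
            classes.append((d, w))
    return np.column_stack([d * w for d, w in classes])

def g_leakage(prior, C, g):
    """g-leakage log2(V_g(prior, C) / V_g(prior)), where the gain
    matrix entry g[w, x] is the gain of guess w against secret x."""
    prior, C, g = (np.asarray(a, float) for a in (prior, C, g))
    v_prior = (g @ prior).max()             # max_w sum_x prior[x] g[w,x]
    joint = prior[:, None] * C              # joint[x, y] = prior[x] C[x,y]
    v_post = (g @ joint).max(axis=0).sum()  # sum_y max_w sum_x ...
    return np.log2(v_post / v_prior)

# The last two columns below are proportional, so the abstract
# channel merges them; leakage is unchanged, here shown with the
# identity gain function (one-try guessing).
C = np.array([[0.5, 0.25, 0.25],
              [0.2, 0.40, 0.40]])
pi = np.array([0.3, 0.7])
A = abstract_channel(C)  # 2 columns instead of 3
assert np.isclose(g_leakage(pi, C, np.eye(2)),
                  g_leakage(pi, A, np.eye(2)))
```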
Abstract. Theories of quantitative information flow offer an attractive framework for analyzing confidentiality in practical systems, which often cannot avoid "small" leaks of confidential information. Recently there has been growing interest in the theory of min-entropy leakage, which measures uncertainty based on a random variable's vulnerability to being guessed in one try by an adversary. Here we contribute to this theory by studying the min-entropy leakage of systems formed by cascading two channels together, using the output of the first channel as the input to the second channel. After considering the semantics of cascading carefully and exposing some technical subtleties, we prove that the min-entropy leakage of a cascade of two channels cannot exceed the leakage of the first channel; this result is a min-entropy analogue of the classic data-processing inequality. We show, however, that a comparable bound does not hold for the second channel. We then consider the min-capacity, or maximum leakage over all a priori distributions, showing that the min-capacity of a cascade of two channels cannot exceed the min-capacity of either channel.
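Here is a small sketch of the quantities this abstract discusses, again under our own representation assumptions (row-stochastic NumPy matrices, illustrative helper names): it computes min-entropy leakage and min-capacity, forms a cascade as a matrix product, and checks the two stated bounds on one example.

```python
import numpy as np

def min_entropy_leakage(prior, C):
    """Min-entropy leakage log2(V(prior, C) / V(prior)), where
    V(prior) = max_x prior[x] is the one-try guessing vulnerability
    and V(prior, C) = sum_y max_x prior[x] C[x,y]."""
    prior, C = np.asarray(prior, float), np.asarray(C, float)
    joint = prior[:, None] * C  # joint[x, y] = prior[x] C[x,y]
    return np.log2(joint.max(axis=0).sum() / prior.max())

def min_capacity(C):
    """Min-capacity log2(sum_y max_x C[x,y]); it is realized at the
    uniform prior and upper-bounds min-entropy leakage on any prior."""
    return np.log2(np.asarray(C, float).max(axis=0).sum())

# A cascade feeds the first channel's output into the second, so
# (once the alphabets line up) the composite matrix is the product.
C1 = np.array([[0.7, 0.3],
               [0.2, 0.8]])
C2 = np.array([[0.9, 0.1],
               [0.4, 0.6]])
pi = np.array([0.6, 0.4])
cascade = C1 @ C2

# Data-processing analogue: the cascade leaks no more than C1 ...
assert min_entropy_leakage(pi, cascade) <= min_entropy_leakage(pi, C1) + 1e-12
# ... and its min-capacity is bounded by both channels' capacities.
assert min_capacity(cascade) <= min(min_capacity(C1), min_capacity(C2)) + 1e-12
```

As the abstract notes, no analogous per-prior bound holds for the second channel: only the min-capacity result constrains C2.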