Abstract-Belief and vulnerability have recently been proposed to quantify information flow in security systems. Both concepts stand as alternatives to the traditional approaches founded on Shannon entropy and mutual information, which have been shown to provide inadequate security guarantees. In this paper we unify the two concepts in one model so as to cope with the (potentially inaccurate) extra knowledge of attackers. To this end we propose a new metric, based on vulnerability, that takes the adversary's beliefs into account.

Keywords-security; information hiding; information flow; quantitative and probabilistic models; uncertainty; accuracy
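To give an informal sense of why Shannon entropy can be a misleading protection measure, consider the standard observation (due to Smith) that two priors with the same entropy can give an adversary very different chances of guessing the secret in one try. The sketch below is our own illustration, not a definition from this paper: it compares entropy with vulnerability, here taken as the maximum prior probability.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def vulnerability(p):
    """Probability that an adversary guesses the secret in one try
    by picking a most likely value."""
    return max(p)

# Skewed prior: one likely secret plus 1024 equally unlikely ones.
skewed = [0.5] + [1 / 2048] * 1024
# Uniform prior over 64 secrets.
uniform = [1 / 64] * 64

# Both priors have 6 bits of Shannon entropy, yet the adversary's
# chance of a correct one-shot guess differs by a factor of 32.
print(shannon_entropy(skewed), vulnerability(skewed))    # 6.0 bits, 0.5
print(shannon_entropy(uniform), vulnerability(uniform))  # 6.0 bits, 0.015625
```

Entropy rates both distributions as equally uncertain, while vulnerability exposes that the skewed prior is far less safe; this gap is the motivation for vulnerability-based metrics.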
I. INTRODUCTION

Protecting sensitive and confidential data is becoming increasingly important in many fields of human activity, such as electronic communication, auctions, payment, and voting. Many protocols for protecting confidential information have been proposed in the literature. In recent years, frameworks for reasoning about, designing, and verifying these protocols have incorporated probabilistic aspects and techniques, for two reasons. First, the data to be protected often range over domains naturally subject to statistical considerations. Second, and more importantly, the protocols often use randomised primitives to obfuscate the link between the information to be protected and the observable outcomes. This is the case, e.g., for DC-Nets [8], Crowds [31], Onion Routing [37], and Freenet [13].

From the formal point of view, the degree of protection is the converse of the leakage, i.e. the amount of information about the secrets that can be deduced from the observables. Early approaches to information hiding in the literature were the so-called possibilistic approaches, in which the probabilistic aspects were abstracted away and replaced by nondeterminism. Examples include the approaches based on epistemic logic [20], [36], on function views [22], and on process calculi [32], [33]. Recently, however, it has been recognised that the possibilistic view is too coarse, in that it tends to consider as equivalent systems that have very different degrees of protection.

Probabilistic approaches are therefore becoming increasingly popular. At first they were investigated mainly in their strongest form of protection, namely the property that the observables reveal no (quantitative) information about the secrets (strong anonymity, no interference) [2], [8], [20]. More recently, weaker notions of protection have been considered, since such strong properties are almost never achievable in practice.
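The strong anonymity / no-interference property mentioned above can be phrased as a condition on the probabilistic channel from secrets to observables: every secret must induce the same distribution on observables, so that observing the outcome tells the adversary nothing. The following sketch (our illustration, with made-up example channels, not taken from the cited works) checks this condition on a channel matrix:

```python
from fractions import Fraction

def strongly_anonymous(C):
    """C[s][o] = P(o | secret s): rows are secrets, columns observables.
    The channel reveals nothing about the secret iff all rows of the
    channel matrix are identical."""
    return all(row == C[0] for row in C)

F = Fraction
# Every secret yields observables with the same distribution: no leakage.
noninterfering = [[F(1, 2), F(1, 2)],
                  [F(1, 2), F(1, 2)]]
# The observable is correlated with the secret: some leakage.
leaky = [[F(3, 4), F(1, 4)],
         [F(1, 4), F(3, 4)]]

print(strongly_anonymous(noninterfering))  # True
print(strongly_anonymous(leaky))           # False
```

With identical rows the adversary's posterior over secrets equals the prior for every observation, which is exactly the "no information" reading of strong anonymity; weaker notions relax this to a bound on how far the posterior may drift from the prior.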
Still within the probabilistic framework, Reiter and Rubin proposed the concepts of possible innocence and probable innocence [31] as weak notions of anonymity protection (see also [4] for a generalisation of the latter). These are, however, still true-or-false properties. The need to express the degree of protection in a quantitative way has then led naturally to the exploration of suitable notions within the well-established fields of Information Theory and of ...