In this paper, we evaluate clock signals generated by ring oscillators and self-timed rings and the way their jitter can be transformed into random numbers. We show that counting the periods of the jittery clock signal produces random numbers of significantly better quality than methods in which the jittery signal is simply sampled, as is the case in almost all current designs. Moreover, we use the counter values to characterize and continuously monitor the source of randomness. However, instead of the widely used statistical variance, we propose to use the Allan variance for this purpose. This has two main advantages: the Allan variance is insensitive to low-frequency noise such as flicker noise, which is known to be autocorrelated, and it requires significantly less circuitry to compute than the ordinary variance. We also show that, to avoid autocorrelation originating from external and internal global jitter sources, it is essential to extract randomness from the jitter using a differential principle based on two identical oscillators, and that this holds for both kinds of rings. Last but not least, we propose a statistical testing method based on a high-order Markov model to show the reduced dependencies when the proposed randomness extraction is applied.
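To make the variance comparison concrete, here is a minimal Python sketch (not taken from the paper; the data and parameters are hypothetical) of how the Allan variance of successive period-counter samples could be computed from first differences, contrasted with the ordinary variance, which is inflated by low-frequency drift:

```python
import numpy as np

def allan_variance(counts):
    """Allan variance of successive counter samples: 0.5 * E[(x[k+1] - x[k])^2].

    Because it is built from first differences of consecutive samples, it
    suppresses slow (low-frequency, e.g. flicker) components that would
    otherwise dominate the ordinary sample variance.
    """
    counts = np.asarray(counts, dtype=float)
    diffs = np.diff(counts)
    return 0.5 * np.mean(diffs ** 2)

# Hypothetical counter values accumulated over equal reference intervals,
# with a white (thermal) jitter part and a slow autocorrelated drift part.
rng = np.random.default_rng(0)
n = 10_000
drift = np.cumsum(rng.normal(0.0, 0.01, n))   # low-frequency, autocorrelated
white = rng.normal(0.0, 1.0, n)               # white jitter
samples = 1000 + drift + white

print("ordinary variance:", np.var(samples))        # inflated by the drift
print("Allan variance:", allan_variance(samples))   # dominated by the white part
```

In this toy setting the ordinary variance grows with the accumulated drift, while the Allan variance stays close to the white-jitter contribution, which is why it is the more suitable quantity for continuous online monitoring of the entropy source.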
Physical side-channel leakage is an important threat to cryptographic implementations. One of the most prominent countermeasures against such leakage attacks is the use of a masking scheme. A masking scheme conceals the sensitive information by randomizing intermediate values, thereby making the physical leakage independent of the secret. An important practical leakage model for analyzing the security of a masking scheme is the so-called noisy leakage model of Prouff and Rivain (Eurocrypt'13). Unfortunately, security proofs in the noisy leakage model require a technically involved information-theoretic argument. Very recently, Duc et al. (Eurocrypt'14) showed that security in the probing model of Ishai et al. (Crypto'03) implies security in the noisy leakage model. Unfortunately, the reduction to the probing model is non-tight and requires a rather counter-intuitive growth of the amount of noise, i.e., the Prouff-Rivain bias parameter must decrease proportionally to the size of the set X of elements that leak (e.g., if the leaking elements are bytes, then |X| = 256). The main contribution of our work is to eliminate this non-optimality in the reduction by introducing an alternative leakage model, which we call the average probing model. We show a tight reduction between the noisy leakage model and the much simpler average probing model; in fact, we show that these two models are essentially equivalent. We demonstrate the potential of this equivalence with two applications: (i) we show security of the additive masking scheme used in many previous works for a constant bias parameter, and (ii) we show that the compiler of Ishai et al. (Crypto'03) is secure in the average probing model (assuming a simple leak-free component), which yields security with an optimal bias parameter of the noisy leakage for the ISW construction.
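For readers unfamiliar with masking, the following Python sketch illustrates the idea behind an additive masking scheme over bytes (so |X| = 256, matching the example in the abstract). It is purely illustrative and not the construction analyzed in the paper; real schemes often use XOR shares or field arithmetic, and the function names here are hypothetical.

```python
import secrets

MOD = 256  # illustrative: shares are bytes, so |X| = 256 as in the abstract

def mask(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares with secret == sum(shares) % MOD.

    Any n-1 shares are jointly uniform and independent of the secret, so noisy
    leakage on individual shares carries little information unless all n leak.
    """
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def unmask(shares: list[int]) -> int:
    return sum(shares) % MOD

shares = mask(0x42, n=4)
assert unmask(shares) == 0x42
```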
The so-called leakage chain rule is a very important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable X when an adversary who has already learned some random variables Z1, ..., Zℓ correlated with X obtains some further information Zℓ+1 about X. Analogously to the information-theoretic case, one might expect that for the computational variants of entropy the loss also depends only on the actual leakage, i.e., on Zℓ+1. Surprisingly, Krenn et al. have recently shown that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in |(Z1, ..., Zℓ)|. This means that the current standard definitions of computational entropy do not fully capture leakage that occurred "in the past", which severely limits the applicability of this notion. As a remedy for this problem, we propose a slightly stronger definition of computational entropy, which we call the modulus computational entropy, and use it as a technical tool to prove the desired chain rule, which depends only on the actual leakage and not on its history. Moreover, we show that the modulus computational entropy unifies other, sometimes seemingly unrelated, notions already studied in the literature in the context of information leakage and chain rules. Our results indicate that the modulus entropy is, up to now, the weakest restriction that guarantees that the chain rule for computational entropy works. As an example application, we demonstrate a few interesting cases where our restricted definition is fulfilled and the chain rule holds.
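For context, the information-theoretic baseline that the paper's computational chain rule mirrors can be stated in the standard form for average min-entropy (this is the well-known bound, not a result of the paper):

```latex
% Standard information-theoretic leakage chain rule: an additional leakage
% Z_{\ell+1} of bit-length \lambda costs at most \lambda bits of average
% min-entropy, independently of the earlier leakages Z_1,\dots,Z_\ell.
\widetilde{H}_\infty\bigl(X \mid Z_1,\dots,Z_\ell,Z_{\ell+1}\bigr)
  \;\ge\;
\widetilde{H}_\infty\bigl(X \mid Z_1,\dots,Z_\ell\bigr) - \lambda,
\qquad Z_{\ell+1}\in\{0,1\}^{\lambda}.
```

The paper's contribution can be read as identifying a strengthened computational entropy notion (the modulus computational entropy) for which an analogous bound holds, with the loss governed by Zℓ+1 alone rather than by the entire leakage history.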