The leakage chain rule is an important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable $X$ when an adversary, who has already learned some random variables $Z_1,\ldots,Z_\ell$ correlated with $X$, obtains further information $Z_{\ell+1}$ about $X$. Analogously to the information-theoretic case, one might expect that also for computational variants of entropy the loss depends only on the actual leakage, i.e., on $Z_{\ell+1}$. Surprisingly, Krenn et al. have recently shown that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in $|(Z_1,\ldots,Z_\ell)|$. This means that the current standard definitions of computational entropy do not allow one to fully capture leakage that occurred "in the past", which severely limits the applicability of this notion. As a remedy for this problem we propose a slightly stronger definition of computational entropy, which we call modulus computational entropy, and use it as a technical tool to prove a chain rule whose loss depends only on the actual leakage and not on its history. Moreover, we show that modulus computational entropy unifies other, sometimes seemingly unrelated, notions already studied in the literature in the context of information leakage and chain rules. Our results indicate that modulus entropy is, to date, the weakest restriction that guarantees that the chain rule for computational entropy holds. As an example of application, we demonstrate a few interesting cases where our restricted definition is fulfilled and the chain rule holds.
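For orientation, the following is a minimal sketch of the information-theoretic counterpart of this chain rule, stated for average conditional min-entropy in the sense of Dodis et al.; it is not the computational variant studied in this paper, and the notation $\lambda$ for the leakage length is introduced here only for illustration:
% Information-theoretic leakage chain rule (illustrative sketch only):
% an additional leakage Z_{l+1} of bit-length lambda costs at most
% lambda bits of average conditional min-entropy.
\[
  \widetilde{H}_{\infty}\bigl(X \mid Z_1,\ldots,Z_{\ell}, Z_{\ell+1}\bigr)
  \;\ge\;
  \widetilde{H}_{\infty}\bigl(X \mid Z_1,\ldots,Z_{\ell}\bigr) - \lambda,
  \qquad \text{where } Z_{\ell+1} \in \{0,1\}^{\lambda}.
\]
The paper's question is whether an analogous bound, with a loss depending only on $Z_{\ell+1}$, can be preserved for computational notions of entropy.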