Most entropy notions H(·), like Shannon entropy or min-entropy, satisfy a chain rule stating that for random variables X, Z, and A we have H(X|Z, A) ≥ H(X|Z) − |A|. That is, conditioning on A can decrease the entropy of X by at most the bitlength |A| of A. Such chain rules are known to hold for some computational entropy notions, such as Yao entropy and unpredictability entropy. For HILL entropy, the computational analogue of min-entropy, the chain rule is of special interest and has found many applications, including leakage-resilient cryptography, deterministic encryption, and memory delegation. These applications rely on restricted special cases of the chain rule. Whether the chain rule for conditional HILL entropy holds in general was an open problem, to which we give a strong negative answer: we construct joint distributions (X, Z, A), where A is a distribution over a single bit, such that the HILL entropy H^HILL(X|Z) is large but H^HILL(X|Z, A) is basically zero.

Our counterexample makes only the minimal assumption that NP ⊈ P/poly. Under the stronger assumption that injective one-way functions exist, we can make all the distributions efficiently samplable.

Finally, we show that some more sophisticated cryptographic objects, like lossy functions, can be used to sample a distribution constituting a counterexample to the chain rule, making only a single invocation of the underlying object.

* This is the full version of [KPW13], which appeared in TCC 2013.
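For intuition, the information-theoretic chain rule stated at the beginning admits a one-line proof for average conditional min-entropy in the sense of Dodis et al.; the following sketch uses our own notation H̃_∞ (not part of the original abstract) and sums over the at most 2^{|A|} values of A:

% Sketch (our notation): chain rule for average conditional min-entropy,
% where A ranges over at most 2^{|A|} values.
\[
\begin{aligned}
2^{-\widetilde{H}_\infty(X \mid Z, A)}
  &= \mathop{\mathbb{E}}_{(z,a)}\Big[\max_x \Pr[X = x \mid Z = z, A = a]\Big] \\
  &= \mathop{\mathbb{E}}_{z}\Big[\textstyle\sum_a \max_x \Pr[X = x, A = a \mid Z = z]\Big] \\
  &\le \mathop{\mathbb{E}}_{z}\Big[\textstyle\sum_a \max_x \Pr[X = x \mid Z = z]\Big]
   = 2^{|A|} \cdot 2^{-\widetilde{H}_\infty(X \mid Z)} .
\end{aligned}
\]
% Taking -log of both sides yields
% \widetilde{H}_\infty(X | Z, A) >= \widetilde{H}_\infty(X | Z) - |A|.

The counterexample of this paper shows that no computational analogue of this simple argument can hold for HILL entropy, even when A is a single bit.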